[ExI] Isn't Bostrom seriously bordering on the reactionary?

Stefano Vaj stefano.vaj at gmail.com
Fri Jun 17 14:26:01 UTC 2011


On 17 June 2011 14:27, Anders Sandberg <anders at aleph.se> wrote:

> We also have fairly good reasons to think that superintelligences will be
> good at achieving their goals (more or less the definition of intelligence),
> so if a hard takeoff happens a single goal or motivation system will direct
> most of the future. So far I have not seen any plausible argument for why
> such a goal would be likely to be human compatible (no, I don't buy Mark R.
> Waser's argument) and some mildly compelling arguments for why they are
> likely to be accidentally inimical to us (the mindspace argument, the risk
> from AGIs with single top goals, Omohundro drives - although that paper
> needs shoring up a lot!). That implies that hard takeoffs pose a possible
> xrisk unless the safe AGI requirements have been properly implemented.
>

Why should superintelligences have a single goal, as opposed to the
hypothetically multiple-goal nature of lesser intelligences? :-/

And how do we define "human", and how can we be sure that we owe loyalty to
such a fuzzy and, in any case, outdated category (see Foucault)?

> Human (and by definition brain emulation) motivation is messy and
> unreliable, but also a fairly known factor. Software intelligence based on
> brain emulations also comes with human-like motivations and interests as
> default, which means that human considerations will be carried into the
> future by an emulation-derived civilization. The potential for a hard
> takeoff with brain emulations is much more limited than for AGI, since the
> architecture is messy and computationally expensive.


Mmhhh. If "goals" is to be anything more than a blatant, projected
anthropomorphism, along the lines of "the goals of the river Mississippi" or
"the goals of my PC", I suspect (after, inter alia, Wolfram) that an AGI
exhibiting such a feature, amongst all possible computing landscapes, would
by definition be the emulation of a biological brain.

Accordingly, the real difference between an uploaded human and an AGI
stricto sensu would simply be that the former would emulate a specific human
being at a specific stage of his or her life, while the latter would emulate
a generic human being (or mammal, for that matter), i.e., a patchwork of
arbitrary traits.

On the other hand, if we speak of even "supremely intelligent" entities that
are not specifically programmed to emulate us or other Darwinian machines, I
do not really see what "motivations" and "values" might mean in this context
that would not already be applicable to any general computing device,
including cellular automata.

Of course, this does not imply in the least that a non-Darwinian
supercomputer, or even an iPhone for that matter, cannot be dangerous.

But this has in principle little to do with "motivations", and given that we
can develop higher- and higher-level interfaces obviating the bandwidth
bottleneck, I dare say that in terms of risk the "system" composed of a
human being with enough computing power at his or her fingertips is
practically indistinguishable from another system of equivalent power based
purely on silicon, irrespective of whether a specific human individual is
emulated or not.

I am thus inclined to think that the real issue is the risk of finding
ourselves at the mercy of somebody, or something, with more computing power,
where of course the obvious preventive measure is to develop *superior*
computing power, not to be haunted by "humans vs. machines" visions, which
can all too easily be deconstructed as a thinly secularised resurgence of
the Golem complex.

> This means that brain emulations force a softer takeoff where single agents
> with single motivations are unlikely to become totally dominant over
> significant resources or problem-solving capacities.
>

Pluralism, diversity and a multiplicity of chances are amongst my own
primary concerns, but it would seem paradoxical to interpret them to the
effect that they should lead to a scenario where a single (legal, social,
economic, moral or perhaps even more literal) system with a single
motivation is globally implemented in order to enforce an order aimed at
preventing... single systems with single motivations from becoming dominant.

In fact, I would consider such a system exactly the materialisation of an
"existential" risk, namely with reference to the existence of the ongoing
process of self-overcoming which alone makes us, or perhaps life in general,
an interesting phenomenon.

-- 
Stefano Vaj