[extropy-chat] Is Telepathy a safer route?

Anders Sandberg asa at nada.kth.se
Wed May 24 21:55:28 UTC 2006


A B wrote:
>   I think that one of the greatest dangers of super-intelligence is the
> distinct possibility that when it emerges (even if as an upload), it
> will be completely unrivaled; there will be only a single mind with that
> awesome power, rather than several or many of comparable intelligence
> and differing intentions.

Why do you think this? If we look at technological projects in the world,
the unrivalled ones are those that take enormous resources, require some
very rare competence, or are simply not regarded as interesting by many.
Projects that look useful, even when only partially successful, tend to
attract a lot of parallel work. Superintelligence would seem to be
something like that. I can imagine a race to develop and market smarter
AI, intelligence amplification or whatever it turns out to be, producing
a world where the topmost superintelligence merely leads a power-law
trail of other intelligences. The main argument against this is the
exponential self-amplification idea, which suggests that there are
economies of scale in intelligence. But I have not yet seen any
convincing arguments for this claim. Overall, working out the dynamics
of accelerating intelligence (whether spikish or swellish) is an
interesting methodological problem.
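
To make the spike/swell distinction concrete, here is a minimal Python
sketch under a toy growth law of my own (nothing of the sort is
established in the thread): intelligence reinvests in itself as
I(t+1) = I(t) + k*I(t)^alpha, and the exponent alpha picks the regime.

# Toy growth law (my assumption): I(t+1) = I(t) + k * I(t)**alpha.
# alpha > 1 models economies of scale in intelligence (a "spike"),
# alpha < 1 models diminishing returns (a "swell"); alpha == 1 sits
# in between as plain exponential growth.

def grow(alpha, k=0.1, i0=1.0, steps=30):
    i = i0
    for _ in range(steps):
        i += k * i ** alpha
    return i

print("spike:", grow(alpha=1.5))  # superlinear: explodes, order 1e16 here
print("swell:", grow(alpha=0.5))  # sublinear: creeps along, about 6 here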


>   A collective "meat-machine" super-intelligence would consist of many
> distinct minds, values, and interests. Its collective "circle of
> empathy" (Jaron Lanier) would likely be huge. No single individual from
> within the collective would be significantly more intelligent than any
> other member, and so no specific "world view" would dominate any others.
> And psychopaths could presumably be screened from the group. It would be
> kind of like a meaty version of Mr. Yudkowsky's "CEV".

Isn't this just a description of a society?

High bandwidth communication does seem to help a society. A group of
people has a productivity that scales with its size, reduced by the
overhead of communication. Enhancing individuals increases the society's
output proportionally; enhancing the synergies between them increases it
with the square of the group's size. More efficient coordination reduces
the overhead, so groups can reach larger optimal sizes.
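
As a back-of-the-envelope sketch of that scaling argument (the formula
and numbers are my own toy assumptions, not a result from anywhere): let
each of n members produce a on their own, and let each pair contribute a
synergy b while costing a communication overhead c, so that
P(n) = a*n + (b - c)*n*(n-1)/2.

# Toy group-productivity model (my assumptions):
#   P(n) = a*n + (b - c) * n*(n-1)/2
# a = individual output, b = pairwise synergy, c = pairwise
# communication overhead. Raising a scales output linearly; raising b
# adds a term quadratic in group size; lowering c pushes the optimal
# group size upward. (If b >= c, P grows without bound and the
# "optimum" is just whatever cap you search up to.)

def output(n, a=10.0, b=1.0, c=2.0):
    return a * n + (b - c) * n * (n - 1) / 2

def optimal_size(a=10.0, b=1.0, c=2.0, n_max=1000):
    return max(range(1, n_max + 1), key=lambda n: output(n, a, b, c))

print(optimal_size(c=2.0))  # heavy overhead: optimum at n = 10
print(optimal_size(c=1.2))  # cheaper communication: optimum at n = 50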

I think we might very well end up with this kind of telepathic
superintelligent society, but it would not necessarily act as a *being*. A
lot of superintelligence talk assumes great minds to be beings, but they
could just as well be something as non-agentlike as an economy or Google.

Also, the best way of taming superintelligences is to make sure they are
part of society and unwilling to oppose it. Friendly superintelligences
want to be there for emotional reasons; rational, selfish
superintelligences may be motivated by the economic benefits of
infrastructure and comparative advantage; and most superintelligences
will of course be rooted in the human/posthuman culture that spawned
them. The telepathic supermind would IMHO likely end up containing the
SIs too.

Maybe one could do an analysis of this a la Nozick's analysis of the
(imaginary) formation of societies in "Anarchy, State, and Utopia",
checking what the crucial ethical and practical points are that would
ensure that SIs join rather than oppose society. And I think one could
steal his argument from the start of the book to argue that people would
love to join the nice borganism.

http://angryflower.com/borg27.gif


-- 
Anders Sandberg,
Oxford Uehiro Centre for Practical Ethics
Philosophy Faculty of Oxford University




