Hi Anders,

I was assuming the "...exponential self-amplification idea..." in my post. I made that assumption because I had read about several current AGI projects that intend to build a "Seed AI". I don't personally know nearly enough to judge the feasibility of "Seed AI", so I can't really comment on that. I strongly agree that an open, free-market system would be a great way to *hopefully* create a smooth continuum of intelligence levels, so that the brightest mind would only barely exceed the runner-up. But that outcome seems to depend on "Seed AI" definitely not working.

Best Wishes,

Jeffrey Herrlich

Anders Sandberg <asa@nada.kth.se> wrote:

A B wrote:
> I think that one of the greatest dangers of super-intelligence is the
> distinct possibility that when it emerges (even if as an upload), it
> will be completely unrivaled; there will be only a single mind with that
> awesome power, rather than several or many of comparable intelligence
> and differing intentions.

Why do you think this? If we look at technological projects in the world,
the unrivalled ones are those that take lots of resources, require some
very unique competence, or are not regarded as interesting by many.
Projects that look useful, even when only partially successful, tend to
get a lot of parallel work. Superintelligence would seem to be something
like that. I can imagine a race to develop and market smarter AI,
intelligence amplification or whatever it is, producing a world where the
top superintelligence just leads a power-law trail of other intelligences.
The main argument against this is the exponential self-amplification idea,
suggesting that there are economies of scale of intelligence. But I have
not yet seen any convincing arguments for this claim. Overall, finding out
the dynamics of accelerating intelligence (whether spikish or swellish) is
an interesting methodological problem.
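As a purely illustrative sketch of the spike-versus-swell question, one
can model each project's capability I as growing by dI/dt = I^alpha,
where alpha > 1 stands in for "economies of scale of intelligence" and
alpha <= 1 for their absence. The growth law and every number below are
assumptions chosen for illustration, not anything established:

import random

def simulate(alpha, n_projects=20, dt=0.01, t_max=20.0, cap=1e9):
    """Euler-integrate dI/dt = I^alpha for a field of rival projects.

    Returns the leader's capability divided by the runner-up's when the
    leader first hits `cap` (or at t_max): a crude 'unrivalledness' score.
    """
    # Projects start with slightly different initial capabilities.
    levels = [1.0 + 0.1 * random.random() for _ in range(n_projects)]
    t = 0.0
    while t < t_max and max(levels) < cap:
        levels = [x + dt * x ** alpha for x in levels]
        t += dt
    levels.sort(reverse=True)
    return levels[0] / levels[1]

random.seed(0)
for alpha in (0.8, 1.0, 1.3):
    print(f"alpha = {alpha}: leader/runner-up ratio = {simulate(alpha):.2f}")

With alpha <= 1 the field stays a smooth trail (ratio near 1); with
alpha > 1 the earliest starter blows up first and the gap becomes
enormous. The whole disagreement sits in that one exponent.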
> A collective "meat-machine" super-intelligence would consist of many
> distinct minds, values, and interests. Its collective "circle of
> empathy" (Jaron Lanier) would likely be huge. No single individual from
> within the collective would be significantly more intelligent than any
> other member, and so no specific "world view" would dominate any others.
> And psychopaths could presumably be screened from the group. It would be
> kind of like a meaty version of Mr. Yudkowsky's "CEV".

Isn't this just a description of a society?

High-bandwidth communication does seem to help a society. A group of
people has a productivity that scales with its size, reduced by the
overhead of communication. Enhancing individuals increases the society's
result proportionally. Enhancing synergies between them increases the
result with the square of their size. More efficient coordination allows
larger groups, which can reach larger optimal sizes.
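To make that scaling claim concrete, here is a back-of-envelope model;
the functional forms and parameter values are assumptions picked only to
match the verbal description above:

def productivity(n, a=1.0, b=0.05, c=0.08):
    """P(n) = a*n (individual output) + b*pairs (synergy) - c*pairs (overhead).

    Synergy and overhead both grow with the n*(n-1)/2 pairwise links; a
    finite optimal group size exists only when overhead outpaces synergy
    (c > b).
    """
    pairs = n * (n - 1) / 2
    return a * n + (b - c) * pairs

def optimal_size(a=1.0, b=0.05, c=0.08, n_max=1000):
    # Brute-force search for the group size maximizing total productivity.
    return max(range(1, n_max + 1), key=lambda n: productivity(n, a, b, c))

print(optimal_size(c=0.08))  # baseline coordination cost
print(optimal_size(c=0.06))  # cheaper coordination -> larger optimal group

Raising a (enhancing individuals) scales P proportionally, raising b
(enhancing synergies) adds a term quadratic in n, and lowering c (more
efficient coordination) pushes the optimum out to larger groups.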
I think we might very well end up with this kind of telepathic
superintelligent society, but it would not necessarily act as a *being*. A
lot of superintelligence talk assumes great minds to be beings, but they
could just as well be something as non-agentlike as an economy or Google.

Also, the best way of taming superintelligences is to make sure they are
part of society and unwilling to oppose it. Friendly superintelligences
want to be there for emotional reasons, rational selfish
superintelligences may be motivated by the economic benefits of
infrastructure and comparative advantage, and most superintelligences will
of course be rooted in the human/posthuman culture that spawned them. The
telepathic supermind would IMHO likely end up containing the SIs too.

Maybe one could do an analysis of this a la Nozick's analysis of the
(imaginary) formation of societies in "Anarchy, State & Utopia", checking
what the crucial ethical and practical points are that would ensure that
SIs would join rather than oppose society. And I think one could steal his
argument from the start to argue that people would love to join the nice
borganism.

http://angryflower.com/borg27.gif

-- 
Anders Sandberg,
Oxford Uehiro Centre for Practical Ethics
Philosophy Faculty of Oxford University

_______________________________________________
extropy-chat mailing list
extropy-chat@lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat