[extropy-chat] IQ vs Uploads (was: what to do)

Dirk Bruere dirk at neopax.com
Mon Jun 13 19:10:03 UTC 2005


Mike Lorrey wrote:

>--- The Avantguardian <avantguardian2020 at yahoo.com> wrote:
>
>>--- Robin Hanson <rhanson at gmu.edu> wrote:
>>
>>>What most needs analysis are changes that are not
>>>captured in existing
>>>trends. IQ has been increasing and that has had
>>>effects for a long time.
>>>So all of the existing trend-based analysis already
>>>captures a big
>>>similar effect.  The effects of the upload
>>>transition are not, however,
>>>much captured in existing trends.
>>
>>Hey Robin,
>>
>>This is a fascinating topic. Why don't you analyze and
>>compare the Flynn effect with Moore's Law? I don't
>>know the shape of Flynn's I.Q. curve vs. time, but if
>>it is exponential rather than linear, it opens up a
>>very cool possibility. Since Moore's Law is
>>exponential, it might come down to a race between the
>>Flynn effect and Moore's Law to see who/what will
>>dominate in the years ahead: A.I. or the minds that
>>CREATE them.
>>    If the rate constant for the Flynn effect is
>>higher than for Moore's Law, then no matter how fast
>>computers and software advance, the human mind might
>>be able to keep pace or even lead. I mean, after all,
>>Deep Blue might have beaten Kasparov, but who would
>>you invite to a cocktail party?
>>
>
>Yes, one conceptual mistake, I believe, with the AI Singularity is the
>automatic assumption that a) desktop AIs will only design smarter
>desktop AIs, rather than, say, smarter human augmentation technologies,
>and b) that humans will only want to design smarter desktop AIs, rather
>than, say, smarter human augmentation technologies. I think the trend
>toward wearables and more powerful mobile computing clearly
>demonstrates that people want tools that make THEM smarter, not tools
>that are smarter than them. Additionally, there is a common
>Singularitarian mistake that upgrades just automagically happen, which is
>wrong. Humans have to choose to upgrade their machines, have to order
>them, have them shipped, installed, etc. The idea of the AI magically
>getting out of the control of its humans is ludicrous. Even if an AI is
>able to use a corporate persona to order things, it will still take
>employees, managers, and a board of directors to allow it to happen and
>make it happen. Even then, there is always the electrical cord to
>unplug to send a truculent AI 'to its room'.
>
Just like we can unplug computers now.
Except we had better replug them pretty quickly, unless we want our
company/whole world to collapse.
AIs will self-improve for the same reason computers get upgraded with
h/w and s/w: because if we don't, our competitors will.
Sure, we can pull the plug, but only if we are willing to return to the
Victorian Era shortly after burying the billions who would starve to
death or die in the resulting wars.
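
As an aside on the Flynn-vs-Moore race proposed upthread: the published
Flynn effect figures are usually quoted as roughly linear, on the order of
3 IQ points per decade, while Moore's Law is exponential, doubling roughly
every two years. If those rough figures hold, any positive exponential
eventually dominates a linear gain. A quick back-of-the-envelope sketch in
Python, treating those numbers as assumptions rather than measurements:

    # Rough comparison of the two growth curves discussed upthread.
    # Assumed, round-number parameters: Flynn effect ~3 IQ points per
    # decade (approximately linear); Moore's Law doubling roughly
    # every 2 years (exponential).

    FLYNN_POINTS_PER_DECADE = 3.0   # assumed linear gain in IQ points
    MOORE_DOUBLING_YEARS = 2.0      # assumed doubling period in years

    def flynn_gain(years, baseline=100.0):
        """Mean IQ after `years`, given a constant linear Flynn gain."""
        return baseline + FLYNN_POINTS_PER_DECADE * (years / 10.0)

    def moore_gain(years, baseline=1.0):
        """Relative transistor count after `years`, given steady doubling."""
        return baseline * 2.0 ** (years / MOORE_DOUBLING_YEARS)

    for years in (10, 20, 50):
        print(f"{years:3d} yr: mean IQ ~{flynn_gain(years):.0f}, "
              f"hardware x{moore_gain(years):,.0f}")

With those numbers the hardware curve is ahead by a factor of roughly a
thousand within twenty years, which is the crux of the rate-constant
question: unless the Flynn effect turns out to be exponential after all,
the race is not close.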

-- 
Dirk

The Consensus:-
The political party for the new millennium
http://www.theconsensus.org






