[extropy-chat] The emergence of AI

Eliezer Yudkowsky sentience at pobox.com
Fri Dec 3 22:49:22 UTC 2004


Hal Finney wrote:
> My guess is that AI will indeed emerge gradually.  Even the fans of
> self-improving AI systems may agree that before the AI can start making
> a significant contribution to improving itself, it must attain human
> level competence in at least such fields as programming and AI.

Not so.  Human competence isn't a level; it's an idiosyncratic flavor.  And
if one chose the theory (un)wisely, the spark of recursive self-improvement
might begin at a level far short of human.  Consider that mere natural 
selection was sufficient to give rise to human intelligence.

> This accomplishment must occur without the help of the AI itself

AI help is not binary, all-or-nothing.  It's a growing degree of assistance.

> and would seem to be a minimum before bootstrapping and exponential
> takeoff could hope to occur.
> 
> Yet achieving this goal will be an amazing milestone with repercussions 
> all through society, even if the self-improvement never works.  And as
> we approach this goal everyone will be aware of it, and of the new
> presence of human-level capability in machines.

In theory, a superintelligence can pop up with little or nothing in the way 
of visible commercial spinoffs from the leading AGI project.  In practice, 
that may well be how it happens.

> Given such a trajectory, I suspect that we will see regulation of AI as 
> a threatening technology, following the patterns of biotech regulation 
> and the incipient nanotech regulation.

The Singularity Institute has had great success in getting people, both 
ordinary folks and AGI researchers, to spend the 15 seconds necessary to 
think up an excuse for why they needn't bother to do anything inconvenient. 
This holds true whether the inconvenient part is thinking about Friendly AI 
(FAI) or just thinking about AI at all; that which is no fun is not done. 
If AI has a high enough profile, we could see millions or even billions of 
people taking 15 seconds to think up an excuse for not paying attention.

To understand the way the world works, consider cryonics.  Death was 
defeated in the 1970s.  No one cared because cryonics sounded sort of 
weird.  People don't need to search very hard for excuses not to think, if 
they must satisfy only themselves.

Human-level AI sounds weird, ergo no one will care until after it happens. 
Human-level AI will happen for around 30 seconds before the AI zips past 
human level.  After that it will be too late.

The matter of the Singularity will be settled in brief crystal moments, the 
threatening blade of extinction and the attempted parry of FAI.  The last 
desperate battle will be conducted in its entirety by a small handful of 
programmers.  The war will be won by deathly cowardice or lost without a 
fight by well-meaning bravery, on the battlefield of a brain in a box in a 
basement somewhere.  The world will find out after it's over, if any 
survive.  I do not know the future, but that is what I would guess.

--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
