[extropy-chat] The emergence of AI

Hal Finney hal at finney.org
Sat Dec 4 00:23:36 UTC 2004


Eliezer writes:
> Hal Finney wrote:
> > My guess is that AI will indeed emerge gradually.  Even the fans of
> > self-improving AI systems may agree that before the AI can start making
> > a significant contribution to improving itself, it must attain human
> > level competence in at least such fields as programming and AI.
>
> Not so.  Human competence isn't a level, it's an idiosyncratic flavor.

What is an idiosyncratic flavor?

> And
> if one chose the theory (un)wisely, the spark of recursive self-improvement
> might begin at a level far short of human.  Consider that mere natural 
> selection was sufficient to give rise to human intelligence.

Yes, natural selection gave rise to human intelligence, but only by an
exceedingly slow and roundabout path.  And some have suggested that it
was astronomically unlikely.  See
http://hanson.gmu.edu/greatfilter.html and
http://hanson.gmu.edu/hardstep.pdf .

Presumably any effort to develop AI will not work by such a haphazard
method, but will involve skill and effort devoted towards a specific goal.
The record of many failed projects makes clear that creating AI is a
tremendously difficult task for beings of merely human intelligence.
I don't see how an AI with a competence level far short of human at tasks
such as programming or designing AI systems could be of significant help.

> In theory, SI can pop up with little or nothing in the way of visible 
> commercial spinoffs from the lead AGI project.  In practice this may well 
> be the case.

What skills would the fledgling AI have that would contribute materially
to the project in a way that a human could not, yet would have no
commercial value?

> > Given such a trajectory, I suspect that we will see regulation of AI as 
> > a threatening technology, following the patterns of biotech regulation 
> > and the incipient nanotech regulation.
>
> The Singularity Institute has had great success in getting people, both 
> ordinary folks and AGI researchers, to spend the 15 seconds necessary to 
> think up an excuse why they needn't bother to do anything inconvenient. 
> This holds true whether the inconvenient part is thinking about FAI or just 
> thinking about AI at all; that which is no fun is not done.  If AI has a 
> high enough profile, we could see millions or even billions of people 
> taking 15 seconds to think up an excuse for not paying attention.
>
> To understand the way the world works, consider cryonics.  Death was 
> defeated in the 1970s.  No one cared because cryonics sounded sort of 
> weird.  People don't need to search very hard for excuses not to think, if 
> they must satisfy only themselves.

I don't understand the relevance of this to the question of whether AI
will be regulated.

> Human-level AI sounds weird, ergo no one will care until after it happens. 
> Human-level AI will happen for around 30 seconds before the AI zips past 
> human level.  After that it will be too late.

Are you serious?  30 seconds, once the AI reaches human level?  What on
earth could yet another human-level contributor to the team accomplish
in that time?

> The matter of the Singularity will be settled in brief crystal moments, the 
> threatening blade of extinction and the attempted parry of FAI.  The last 
> desperate battle will be conducted in its entirety by a small handful of 
> programmers.  The war will be won by deathly cowardice or lost without a 
> fight by well-meaning bravery, on the battlefield of a brain in a box in a 
> basement somewhere.  The world will find out after it's over, if any 
> survive.  I do not know the future, but that is what I would guess.

I don't see how it can happen so quickly.  I envision a team with several
key members and an AI, where the AI gradually begins making a useful
contribution of its own.  Eventually it becomes so capable that it is
doing more than the rest of the team, and from that point its competence
could, conceivably, grow exponentially.  But I don't see any reason why
this process would go as fast as you describe.
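As a rough illustration of why even exponential growth need not be fast
on human timescales, here is a toy back-of-the-envelope model.  The
starting share and doubling time are invented parameters chosen for
illustration, not estimates of anything:

    # Toy model: the AI's relative contribution to the project starts
    # small and doubles at a fixed interval.  Both parameters are
    # illustrative assumptions, not predictions.
    import math

    def days_until_ai_dominates(initial_share=0.01, doubling_days=30.0,
                                threshold=0.5):
        """Days until a contribution that starts at `initial_share` of
        the team's output and doubles every `doubling_days` first
        reaches `threshold` of the total."""
        doublings = math.log2(threshold / initial_share)
        return doublings * doubling_days

    print(round(days_until_ai_dominates()))  # ~169 days: months, not seconds

Even granting a generous one-month doubling time, the crossover takes
months.  Compressing it into 30 seconds would require the doubling time
itself to shrink by many orders of magnitude, and that is the step for
which I see no argument.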

Hal


