[extropy-chat] The emergence of AI
Eliezer Yudkowsky
sentience at pobox.com
Sat Dec 4 01:31:08 UTC 2004
Hal Finney wrote:
> Eliezer writes:
>
>>Hal Finney wrote:
>>
>>>My guess is that AI will indeed emerge gradually. Even the fans of
>>>self-improving AI systems may agree that before the AI can start making
>>>a significant contribution to improving itself, it must attain human
>>>level competence in at least such fields as programming and AI.
>>
>>Not so. Human competence isn't a level, it's an idiosyncratic flavor.
>
> What is an idiosyncratic flavor?
I mean that "human competence" at programming isn't a level like 83.4, it's
a set of very weird and unusual things that humans do, at the end of which
one finds a program that could almost certainly have been attained through
far more direct and efficient means, plus it wouldn't have all the bugs.
>>And
>>if one chose the theory (un)wisely, the spark of recursive self-improvement
>>might begin at a level far short of human. Consider that mere natural
>>selection was sufficient to give rise to human intelligence.
>
> Yes, natural selection gave rise to human intelligence, but only by an
> exceedingly slow and roundabout path. And there are some who suggest
> that it was almost infinitely unlikely. See
> http://hanson.gmu.edu/greatfilter.html and
> http://hanson.gmu.edu/hardstep.pdf .
But are the hard steps points along the evolutionary trajectory? And are
the hard steps things like: "The first genetic code happens to have 64
codons, which will eventually end up coding for 20 amino acids"?
> Presumably any effort to develop AI will not work by such a haphazard
> method, but will involve skill and effort devoted towards a specific goal.
Indeed so.
> The record of many failed projects makes clear that creating AI is a
> tremendously difficult task for beings of merely human intelligence.
No; they didn't know what they were doing. It is not that they knew
exactly what they were doing and failed anyway, because even with perfect
theoretical understanding the pragmatic problem was too hard. They threw
themselves at the problem without a clue and went splat. Every scientific
problem is unsolved for thousands of years before it is solved: chemistry,
astronomy. You cannot conclude from present difficulties that it would be
impossible for a 99th-percentile geek in Mudville, Idaho, to create an AI
on his home computer once the state of knowledge in Artificial Intelligence
reaches the maturity of today's chemistry or astronomy.
> I don't see how an AI with a competence level far short of human at tasks
> such as programming or designing AI systems could be of significant help.
Even having a "compiler" is a hugely significant help.
>>Human-level AI sounds weird, ergo no one will care until after it happens.
>>Human-level AI will happen for around 30 seconds before the AI zips past
>>human level. After that it will be too late.
>
> Are you serious? 30 seconds, once the AI reaches human level? What on
> earth could yet another human-level contributor to the team accomplish
> in that time?
"Human level" is an idiosyncratic flavor. I am talking about an AI that is
passing through "roughly humanish breadth of generally applicable
intelligence if not to humanish things". At this time I expect the AI to
be smack dab in the middle of the hard takeoff, and already writing code at
AI timescales.
>>The matter of the Singularity will be settled in brief crystal moments, the
>>threatening blade of extinction and the attempted parry of FAI. The last
>>desperate battle will be conducted in its entirety by a small handful of
>>programmers. The war will be won by deathly cowardice or lost without a
>>fight by well-meaning bravery, on the battlefield of a brain in a box in a
>>basement somewhere. The world will find out after it's over, if any
>>survive. I do not know the future, but that is what I would guess.
>
> I don't see how it can happen so quickly. I envision a team with several
> key members and an AI, where the AI gradually begins making a useful
> contribution of its own. Eventually it becomes so capable that it is
> doing more than the rest of the team, and from that point its competence
> could, conceivably, grow exponentially. But I don't see any reason why
> this process would go as fast as you describe.
Because the AI is nothing remotely like the other team members. It is not
like an extra coder on the project. You can't even view it as a separate
contributor to itself. The AI's competence defines the pattern of the AI.
When the AI's competence increases, the whole pattern - implementation,
if perhaps not structure - of the AI changes. You can't expect the AI to
sit down and rewrite one module at a time, the way a human programmer
would. The more relevant capabilities,
among those the AI possesses at a given time, are those which operate on a
timescale permitting them to be applied to the entire AI at once. Slower
capabilities would be used to rewrite global ones, or forked and
distributed onto more hardware, or used to rewrite themselves to a speed
where they become globally applicable. The AI is not like a human. If you
visualize a set of modules yielding capabilities that turn back and rewrite
the modules, and play with the possibilities, you will not get any slow
curves out of it. You will get sharp breakthroughs, bottlenecks that a
human has to overcome, and a final sharp breakthrough that carries the AI
to
nanotech and beyond. An FAI researcher might deliberately choose to slow
down that final breakthrough. AGI researchers seem uninterested in doing
so, though most are willing to invest 15 seconds in thinking up a reason
why they need not care right now. (Sometimes it's okay to care in the
indefinite future, just never this minute.)
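
To put the "sharp breakthroughs, not slow curves" point in concrete form,
here is a minimal toy sketch in Python. Every constant and the functional
form are illustrative assumptions of this sketch, not a model of any actual
AI design: a scalar capability reinvested in rewriting the system,
compounding at machine timescales, set against a team whose output
accumulates by fixed increments.

# Toy model (illustrative assumptions only, not any actual AI design):
# a scalar "capability" reinvested in rewriting the system itself, at
# machine timescales, versus a team whose output grows by fixed increments.

def human_team(steps, rate=0.01, start=1.0):
    """Capability rises by a fixed increment of outside effort per step."""
    c, history = start, []
    for _ in range(steps):
        c += rate
        history.append(c)
    return history

def self_improver(steps, efficiency=0.01, rewrites_per_step=50, start=0.001):
    """Each rewrite adds a gain proportional to current capability, and
    many rewrites fit into one human-scale step: compounding, not
    accumulation."""
    c, history = start, []
    for _ in range(steps):
        for _ in range(rewrites_per_step):
            c += efficiency * c
        history.append(c)
    return history

if __name__ == "__main__":
    steps = 40
    human = human_team(steps)
    ai = self_improver(steps)
    near_human = sum(1 for h, a in zip(human, ai) if 0.5 * h <= a <= 2.0 * h)
    crossover = next(i for i, (h, a) in enumerate(zip(human, ai)) if a > h)
    print(f"AI is within a factor of two of the human level on "
          f"{near_human} of {steps} steps")
    print(f"AI passes the human level at step {crossover}")

The toy self-improver hugs the floor for most of the run, is anywhere near
the "human level" for only a handful of steps, and then blows past it.
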
The more you know about AI, the less anthropomorphic your expectations will
be, because a novice must imagine by analogy to humans with strange mental
modifications, rather than rethinking the nature of mind (recursively
self-improving optimization processes) from scratch. The interaction
between the AI rewriting itself and the programmers poking and prodding the
AI from outside will not resemble adding a human to the team. The
trajectory of the AI will not unfold on a comfortably slow and steady
timescale.
The AI will never be humanishly intelligent, and if you pick an arbitrary
metric of intelligence the AI will be "human-level" for around thirty
seconds in the middle of the final hard takeoff. (Barring a deliberate
slowdown by FAI programmers; we speak now of the "natural" character of the
trajectory, and the security risk.) Expect the AI's self-programming
abilities to resemble not at all the slow grinding of unreliable human
metaphor. The faster the AI's abilities operate relative to human ones,
the less of that arbitrarily defined "intelligence" the optimization
process needs to spark a takeoff. This would lower the critical threshold
below human intelligence, even leaving out the advantages of AI: the
ability to run thousands of different cognitive threads on distributed
hardware, and above all the recursive nature of self-improvement (which,
this point has to keep
on being hammered home, is absolutely unlike anything in human experience).
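
The same crude compounding assumption puts the threshold claim in toy form
(again, every number here is illustrative, nothing more): the starting
capability needed to reach a fixed "takeoff" bar within a fixed human-scale
window shrinks geometrically as the process runs faster.

# Toy threshold sketch, same crude compounding assumption as the sketch
# above; the constants are illustrative, not measurements of anything.

def critical_start(rewrites_per_step, window_steps=100,
                   efficiency=0.01, bar=1.0):
    """Smallest starting capability that still reaches `bar` within the
    window, if each rewrite multiplies capability by (1 + efficiency)."""
    total_rewrites = rewrites_per_step * window_steps
    return bar / (1.0 + efficiency) ** total_rewrites

if __name__ == "__main__":
    for speedup in (1, 2, 5, 10, 20):
        print(f"{speedup:>2}x machine speed -> critical starting "
              f"capability ~{critical_start(speedup):.1e}")

The absolute numbers mean nothing; the direction of the falloff is the
point.
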
A lot of this is in section III of LOGI. Consider rereading it.
http://singinst.org/LOGI/.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence