On 5/16/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> It is rather reminiscent of someone lecturing me on how, if I don't
> believe in Christ, Christ will damn me to hell. But Christians have at
> least the excuse of being around numerous other people who all believe
> exactly the same thing, so that they are no longer capable of noticing
> the dependency on their assumptions, or of properly comprehending that
> another might not share their assumptions.

*ahem* Given that in both cases I'm the one _failing_ to believe in the
imminent manifestation of deity on Earth via never-observed processes,
I think your analogy has one of those "minus sign switched with plus
sign" bugs ;)<br>
</div><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">And you, Russell, need not list any of the negative consequences of<br>believing in a hard takeoff when it doesn't happen, to convince me that
<br>it would be well not to believe in it supposing it doesn't happen.<br>
</blockquote></div><br>
Then perhaps what I need to do is explain why it won't happen? Okay, to make sure we're on the same page, a definition:<br>
<br>
Hard takeoff = a process by which seed AI (of complexity buildable by a
team of very smart humans in a basement) undertakes self-improvement to
increase its intelligence, without needing mole quantities of
computronium, _without needing to interact with the real world outside
its basement_, and without needing a long time; subsequently emerging
in a form that is already superintelligent.

1) I've underlined the most important condition, because it exposes the
critical flaw: the concept of hard takeoff assumes "intelligence" is a
formal property of a collection of bits. It isn't. It's an informal
comment on the relationship between a collection of bits and its
environment.

Suppose your AI comes up with a new, supposedly more intelligent
version of itself and is trying to decide whether to replace the
current version with the new version. What algorithm does it use to
tell it whether the new version is really more intelligent? Well it
could apply some sort of mathematical criterion, see whether the new
version is faster at proving a list of theorems, say.

But the most you will ever get out of that process is an AI that's
intelligent at proving mathematical theorems - not because you didn't
apply it well enough, but because that's all the process was ever
trying to give you in the first place. To create an AI that's
intelligent at solving real world problems - one that can cure cancer
or design a fusion reactor that'll actually work when it's turned on -
requires that the criterion for checking whether a new version is
really more intelligent than the old one involves testing its
effectiveness at solving real world problems. Which means the process
of AI development must involve interaction with the real world, and
must be limited in speed by real world events even if you have a
building full of nanocomputers to run the software.
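
Just to pin down the shape of that argument, here's a minimal sketch of
the selection step I'm describing - purely hypothetical names, nobody's
actual design - showing that whatever function you plug in as the
acceptance test is the only sense of "more intelligent" the loop can
ever select for:

    # Hypothetical sketch only; "benchmark" stands for whatever closed
    # criterion (e.g. speed at proving a fixed list of theorems) the AI
    # applies internally.

    def accept_successor(current, candidate, benchmark):
        """Keep the candidate only if it beats the current version on the
        fixed benchmark. By construction, that benchmark is the only notion
        of "more intelligent" this step can ever reward."""
        return candidate if benchmark(candidate) > benchmark(current) else current

    def self_improvement_loop(ai, propose_successor, benchmark, steps):
        for _ in range(steps):
            candidate = propose_successor(ai)   # AI designs its own successor
            ai = accept_successor(ai, candidate, benchmark)
        return ai

The moment benchmark() has to mean "actually cures cancer" or "the
reactor works when switched on", each pass through the loop is paced by
real-world experiments rather than by the clock speed of the basement
hardware.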

There are other problems, mind you:

2) Recursive self-improvement mightn't be a valid concept in the first
place. If you think about it, our reason for confidence in the
possibility of AI and nanotechnology is that we have - we are -
existence proofs. There is no existence proof for full RSI. On the
contrary, all the data goes the other way: every successful complex
system we see, from molecular biology to psychology, law and politics,
computers and the Internet, etc., is designed as a stack of layers where
the upper layers are easy to change, but they rest on more fundamental
lower ones that typically are changed only by action of some still more
fundamental environment.

And this makes sense from a theoretical viewpoint. Suppose you're
thinking about replacing the current version of yourself with a new
version. How do you know the new version doesn't have a fatal flaw
that'll manifest itself a year from now? Even if it's not one that makes
you drop dead, it might be one that slightly degrades long-term performance;
adopting a long string of such changes could be slow suicide. There's
no way to mathematically prove this won't happen. Gödel, Turing, Rice
et al. show that in general you can't prove properties of a computer
program even in a vacuum, let alone in the case where it must deal with a
real world for which we have no probability distribution. The layered
approach is a pragmatic way out of this, a way to experiment at the
upper levels while safeguarding the fundamental stuff; and that's what
complex systems in real life do.
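
For reference, the formal result lurking behind that point is Rice's
theorem (stated here in LaTeX notation; reading "is a genuine
improvement" as a property of the program's input/output behaviour is my
gloss, not part of the theorem):

    \textbf{Rice's theorem.} Let $P$ be any non-trivial property of the
    partial function computed by a program (non-trivial: some computable
    function has $P$ and some does not). Then the set
    $\{\, e \mid \varphi_e \text{ has property } P \,\}$ is undecidable.

So if "the new version is a genuine improvement" is cashed out purely in
terms of the successor's behaviour, no internal procedure can decide it
in general.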

The matter has been put to the test: Eurisko was a full RSI AI. Result:
it needed constant human guidance or it'd just wedge itself.

3) On to the most trivial matter, computing power: there's been a lot
of discussion about how much it might take to run a fully developed and
optimized AI, and the consensus is that ultimate silicon technology
might suffice... but that line of argument ignores the fact that it
takes much more computing power to develop an algorithm than it does to
run it once developed. Consider how long it takes you to solve a
quadratic equation, compared to the millennia required for all of human
civilization to discover algebra. Or consider how long it takes to boot
up Windows or Linux versus how long it took to write them (let alone
develop the concept of operating systems)... and now remember that the
writing of those operating systems involved large distributed systems
of molecular nanocomputers vastly more powerful than the trivial
silicon on which they run.
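
As a throwaway illustration of that run-versus-develop asymmetry
(nothing here is an estimate, it's just the quadratic example made
concrete):

    import math

    def solve_quadratic(a, b, c):
        """Running the finished algorithm: a handful of arithmetic
        operations (assumes a != 0). Discovering that such a formula
        exists at all took human civilisation millennia - the development
        cost dwarfs the cost of running the result once."""
        disc = b * b - 4 * a * c
        if disc < 0:
            return ()                  # no real roots
        root = math.sqrt(disc)
        return ((-b + root) / (2 * a), (-b - root) / (2 * a))

    print(solve_quadratic(1, -3, 2))   # (2.0, 1.0), in microseconds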

Bottom line, I think we'll need molecular computers to develop AI - and
not small prototype ones either.

So that's one reason why the concept is definitely incoherent, an
independent reason why there's good reason to believe it's incoherent,
and a third, again independent, reason why it wouldn't be practical in
the foreseeable future even if it were coherent.