[extropy-chat] Bluff and the Darwin award

Russell Wallace russell.wallace at gmail.com
Wed May 17 20:34:15 UTC 2006


On 5/17/06, Samantha Atkins <sjatkins at mac.com> wrote:

> You seem to be confusing Singularity with some particular extremely
> utopian or dystopian outcomes.  I am not using the word in that sense and I
> don't think many others in this thread are either.  I don't expect a deity
> to manifest on Earth as part and parcel of Singularity.
>

Eliezer effectively does, though, and it's his view that I was arguing
against here.

To recap, my position is:

1) The Singularity is a fine way of thinking about the distant future, a
state of affairs we may ultimately reach.

2) However, we are not yet close enough to make meaningful, specific
predictions about questions such as what society will be like during and
after the Singularity.

3) Therefore it is not good to turn it into a political football at this
time: we lack the data to base a political debate on fact and reason, and a
debate based instead on conjecture and emotion is likely to be
counterproductive.

There are various memes floating around for versions of the idea that the
Singularity can be achieved on a short timescale by currently known
processes. I'm arguing that none of these are realistic; the post you're
replying to here is the one in which I presented my reasons for believing
that Eliezer's version in particular, the one involving recursive
self-improvement, is unrealistic. I do this reluctantly, since when I
studied the question of hard takeoff it was not with a view to disproving
it. But I find the conclusion inescapable nonetheless.

> Hard takeoff = a process by which seed AI (of complexity buildable by a team
> of very smart humans in a basement) undertakes self-improvement to increase
> its intelligence, without needing mole quantities of computronium, _without
> needing to interact with the real world outside its basement_, and without
> needing a long time; subsequently emerging in a form that is already
> superintelligent.
>
>
> Who says this will happen with no interaction with the rest of the world
> on the part of the seed AI?
>

Eliezer has said it can so happen, and you yourself support this conjecture
in the next few sentences:

> As you know much has been written about the difficulty of keeping the AI
> isolated sufficiently from the world.   But I have no reason to consider
> self-improvement or even really clever hacking utterly insufficient to reach
> super-intelligence in that "basement" with just the corpus of information
> reachable in read-only mode from the internet and reasonably good
> self-improving code.   The latter is imho key if in fact humans are
> incapable of designing the components of a super-intelligence.
>

The concept of "self-improving code" is one of the big pitfalls in reasoning
about this area (the other being the concept of "intelligence" as a formal
property of a collection of bits). Let's stop to dissect it for a bit.

When we think about self-improving code, the image that always comes to our
minds (not just mine; this is true of the examples people post whenever the
idea is discussed) is that of tweaking code to produce the _same output_
using fewer CPU cycles or bytes of memory.

I emphasize those words because they are the crux of this issue. Improving
an AI's performance isn't about producing the same output faster, it's about
producing _different output_ from the same input. In other words, it's not
just about changing the code, but changing the _specification_, and that's a
completely different thing. As you know yourself, changing the specification
with a reasonable assurance that the result will be an improvement isn't the
sort of thing that can be done by a smart compiler. It's something that
requires domain (not just programming) expertise, and real-world testing -
which is what I've been saying.
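
To make the distinction concrete, here's a toy Python sketch (purely my own
illustration; the function names and numbers are invented for the example):

def sum_to_n_loop(n):
    """Original: add the integers 1..n one at a time, O(n) steps."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_formula(n):
    """'Improved': Gauss's closed form, O(1). Same output for every input,
    so the rewrite can be verified mechanically, without ever leaving
    the basement."""
    return n * (n + 1) // 2

# Mechanical check that the optimization preserves the specification:
assert all(sum_to_n_loop(n) == sum_to_n_formula(n) for n in range(1000))

# Improving intelligence is the other kind of change: deciding the function
# should compute something *different* (and better) from the same input.
# There is no assert inside the box that can tell you the new specification
# is an improvement; that verdict takes domain expertise and real-world
# testing.

The assert above is exactly what "same output, fewer cycles" looks like;
there is no analogous assert for "better output".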

> Why do you think computronium is required?   It was not required to get to
> human intelligence obviously.
>

Yes, it was. A single human brain contains more than an exaflop's worth of
fault-tolerant, self-rewiring nanotech computronium (and that's just the
neurons, never mind the as-yet-unquantified contributions of the glial
cells, the peripheral nervous system and the rest of the body). And getting
to human intelligence took a large population of such brains, over millions
of years of interaction with and live testing in the real world.
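
For a rough sense of scale, here's the usual back-of-envelope arithmetic
(the figures are order-of-magnitude estimates only, and how many operations
to charge per synaptic event is anyone's guess):

neurons             = 1e11   # roughly 10^11 neurons in a human brain
synapses_per_neuron = 1e4    # roughly 10^4 synapses per neuron
signal_rate_hz      = 1e2    # up to ~100 signals per second (generous)

events_per_second = neurons * synapses_per_neuron * signal_rate_hz
print(f"{events_per_second:.0e} synaptic events per second")   # ~1e+17

# Charge even a handful of floating-point operations per synaptic event and
# you land in the 10^17 to 10^18 FLOPS range, which is where exaflop-scale
# estimates come from. That still ignores the glia and everything outside
# the skull.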

Since we're not restricted to blind Darwinian evolution, it shouldn't take
us millions of years to create AI; but the requirement for interaction with
the real world isn't going to go away.

> 1) I've underlined the most important condition, because it exposes the
> critical flaw: the concept of hard takeoff assumes "intelligence" is a
> formal property of a collection of bits. It isn't. It's an informal comment
> on the relationship between a collection of bits and its environment.
>
>
> Huh?  It is you who posited a completely isolated environment that cannot
> be breached.
>

Actually the concept has been floating around over on SL4 since long before
I joined the list; I was just summarizing.

> An initial environment does not have to include the entire world in order
> for the intelligence to grow.
>

Sure. That doesn't change the fact that the AI will depend on an environment
and the rate at which it learns will depend on the rate at which it can do
things in that environment. The reason I'm emphasizing this is to refute
Eliezer's idea that the AI can learn at the rate at which transistors switch
between 0 and 1, independent of the real world.
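
A toy model of what I mean (the numbers are invented purely for
illustration):

# Each learning cycle = think up a change, then test it against the world.
think_time_s      = 1e-3      # hypothesis generation in silico: milliseconds
experiment_time_s = 86400.0   # one real-world test: call it a day
cycles_needed     = 1000      # design/test iterations to get something right

total_seconds = cycles_needed * (think_time_s + experiment_time_s)
print(f"{total_seconds / 86400:.0f} days")   # ~1000 days

# Make the thinking step a million times faster and the total barely moves:
# the loop runs at the speed of the slowest feedback channel, which is the
# world, not the transistors.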

> Yes and no.  What is the real world constraint on building and testing
> software?
>

The hard part is the specification: knowing what output your program should
be producing in the first place.

> Yes if you want to solve engineering problems you eventually need to get
> beyond the design tools and simulation but these are still highly critical.
> Actually building and testing a new solution can be done by human beings
> from the design the AI came up with.  Where is the crucial problem?
>

Design tools and simulation are critical, yes, and actually building and
testing solutions by humans is also critical (I don't expect an early-stage
AI to be able to control robots effectively).

As long as these things are available, there is no crucial problem. Please
bear in mind that I'm not saying strong AI isn't possible - on the contrary,
I believe it is. I'm just listing the requirements for it.

Imagine this is 1906 and there's a debate about whether it's possible to put
a man on the moon, and we have three factions:

1) Skeptics: No way!
2) Russell: Yes, ultimately. But it will take many decades of hard work
designing successively more powerful rockets, building them and - you can't
skip this part - testing them in the real world. There are no easy short
cuts.
3) Eliezer and co: You don't know that. There's no reason why someone
couldn't just launch a successful moon shot from their basement anytime now.

My position is: it can be done, but there are tough practical requirements
that need to be met if one wants a real chance at it. There are no easy
short cuts.