[ExI] Hard Takeoff

spike spike66 at att.net
Thu Nov 18 19:38:54 UTC 2010


... On Behalf Of Samantha Atkins
...
>...There is sound argument that we are not the pinnacle of possible
>intelligence.  But that that is so does not at all imply or support that AGI
>will FOOM to godlike status in an extremely short time once it reaches human
>level (days to a few years tops)...- s

Ja, but there are reasons to think it will.  Eliezer described the hard
takeoff as analogous to a phase change, and that analogy has its merits.

First: look at the progress of Cro-Magnon man.  We have been in our current
form for about 35,000 years.  Had the right tools and infrastructure already
existed, the people of 35,000 years ago could have had everything we have
today.  But we didn't have that.  We gradually accumulated this piece and
that piece, painfully slowly, sometimes losing pieces, sometimes going down
erroneous paths.  But eventually we accumulated infrastructure, putting more
and more pieces in place.  Now technology has exploded in the last 1 percent
of that time, and the really cool stuff has happened in our lifetimes, the
last tenth of a percent.  We have accumulated critical mass in so many
critical areas.

Second: we now have a vision of what will happen, and a vague notion of the
path (we think).

Third: programming is right at the upper limit of human capability.
Interesting way to look at it, ja?  But think it over: only a fraction of
humanity is capable of writing code at all.  Most of us here have at one time
or another taken on a programming task, only to eventually fail, finding it a
bit beyond our coding capabilities.  But if we were to achieve a human-level
AGI, that AGI could replicate itself arbitrarily many times; the copies could
form a team to create a program smarter than themselves, which could then
replicate in turn; rinse and repeat until all available resources in that
machine are fully and optimally utilized.

Whether that process takes a few hours, a few weeks, or a few years doesn't
much matter, for most of the progress would happen in the last few minutes.
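To make that "last few minutes" point concrete, here is a minimal toy sketch
in Python.  It is purely illustrative and every number in it is an assumption
of mine, not anything from the argument above: I assume capability doubles
each improvement cycle, and that a system twice as capable finishes its next
cycle twice as fast.

    # Toy model of recursive self-improvement (illustrative assumptions only).
    capability = 1.0       # start at "human level"
    time_elapsed = 0.0     # wall-clock time, arbitrary units
    history = []
    while capability < 1e6:            # run until an arbitrary ceiling
        cycle_time = 1.0 / capability  # assume: smarter system improves faster
        time_elapsed += cycle_time
        capability *= 2.0              # assume: each cycle doubles capability
        history.append((time_elapsed, capability))

    total = time_elapsed
    # capability at the first cycle completed after 99% of the time had passed
    late = [c for t, c in history if t > 0.99 * total]
    print("total elapsed time:", round(total, 6))
    print("final capability:", history[-1][1])
    print("capability once 99% of the time had passed:", late[0])

Run it and the output shows capability has only reached 128 when 99 percent
of the elapsed time has passed, against a final value over a million; nearly
all of the growth is crammed into the last sliver of wall-clock time, which
is the phase-change flavor of the argument.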

Given the above, I must conclude that recursive self-improving software will
optimize itself.  I am far less sure that it will give a damn what we want.

spike
