[ExI] Hard Takeoff

Samantha Atkins sjatkins at mac.com
Fri Nov 19 17:38:35 UTC 2010


On Nov 18, 2010, at 11:38 AM, spike wrote:

> ... On Behalf Of Samantha Atkins
> ...
>> ...There is sound argument that we are not the pinnacle of possible
> intelligence.  But that that is so does not at all imply or support that AGI
> will FOOM to godlike status in an extremely short time once it reaches human
> level (days to a few years tops)...- s 
> 
> Ja, but there are reasons to think it will.  Eliezer described the hard
> takeoff as analogous to a phase change.  That analogy has its merits.

If the analogy is used to claim the above as the most likely outcome, then it doesn't have enough merit to carry that claim.  Drawing an analogy to various speed-up steps in history is insufficient to make the case.  It is suggestive but not sufficient.  I am of course familiar with those arguments.


>  If
> you look at the progress of Cro Magnon man, we have been in our current form
> for about 35,000 years.  Had we had the right tools and infrastructure, we
> could have had everything we have today, with people 35,000 years ago.  But
> we didn't have that.  We gradually accumulated this piece and that piece,
> painfully slowly, sometimes losing pieces, going down erroneous paths.  But
> eventually we accumulated infrastructure, putting more and more pieces in
> place.  Now technology has exploded in the last 1 percent of that time, and
> the really cool stuff has happened in our lifetimes, the last tenth of a
> percent.  We have accumulated critical masses in so many critical areas.
> 
> Second: we now have a vision of what will happen, and a vague notion of the
> path (we think.)

Actually, we don't have that clear a vision.  This is something we should admit. 

> 
> Third: programming is right at the upper limit of human capability.
> Interesting way to look at it, ja?

I have been saying this for a while now.  Without software writing software, software is extremely unlikely to advance much.  We write code that we are capable of understanding and maybe maintaining.  Hell, much of the time we are instructed to write code that we expect to be maintained by someone less intelligent than ourselves.


>  But think it over: it is actually only a
> fraction of humanity that is capable of writing code at all.  Most of us
> here have at one time or another taken on a programming task, only to
> eventually fail, finding it a bit beyond our coding capabilities.  

I haven't found one of those yet.  Tasks where I cannot find a viable way to end up with what I hoped for as quickly as I hoped, yes.  Problems with no known solution, yes.  Problem areas needing new, as-yet-unknown approaches and breakthroughs, yes.  Problems that can't be addressed with language BlubX, yes.

But I certainly do find myself pushing the edge of what I can think about, of how much I can effectively wrap my head around, again and again.  I guess that is part of what you mean.

> But if we
> were to achieve a human level AGI, then that AGI could replicate itself
> arbitrarily many times, it could form a team to create a program smarter
> than itself, which could then replicate, rinse, repeat, until all available
> resources in that machine are fully and optimally utilized.
> 

a) replication depends on unit cost and next-unit ROI (a toy sketch of what I mean follows below);
b) it is very unlikely that one machine is going to support multiple AGIs anytime soon.  What kind of architecture would allow this without major machine resource contention?  Thinking of running full AGIs in VServer instances?
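To put some numbers on (a): here is a purely hypothetical back-of-envelope sketch in Python.  Every figure in it is made up; the only point is that a finite budget plus diminishing next-unit ROI puts a ceiling on how many copies are worth making at all.

    # Toy model: replication bounded by unit cost and next-unit ROI.
    # All figures are invented for illustration, not claims about real AGI economics.

    def copies_worth_making(budget, unit_cost, base_return, diminish=0.8):
        """Keep adding copies while the next copy's expected return
        exceeds its cost and the budget holds out."""
        copies = 0
        next_return = base_return
        while budget >= unit_cost and next_return > unit_cost:
            budget -= unit_cost
            copies += 1
            next_return *= diminish  # each additional copy is worth less
        return copies

    if __name__ == "__main__":
        # Hypothetical: $10M budget, $500k per copy, first copy worth $2M,
        # each further copy worth 20% less than the one before.
        print(copies_worth_making(10000000, 500000, 2000000))  # stops at 7 copies

Change any of those made-up numbers and the answer changes, which is exactly the point: the replication rate is an economic question, not a free parameter.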

> Whether that process takes a few hours, a few weeks, a few years, it doesn't
> matter, for most of that process would happen in the last few minutes.
> 

Of what process precisely?

> Given the above, I must conclude that recursive self-improving software will
> optimize itself.  I am far less sure that it will give a damn what we want.

I agree with that much.  I don't agree that it has no effective resource or cost constraints limiting how fast it does so.
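To illustrate that last sentence, a second purely hypothetical sketch: even if every optimization pass multiplies capability, clamping each step to whatever hardware or budget is actually available flattens the curve dramatically.  The 20% gain per pass and the 50x ceiling are invented numbers, not predictions.

    # Toy recursive self-improvement curve, with and without a resource ceiling.
    # The per-pass gain and the ceiling are invented for illustration.

    def improve(capability, passes, gain=1.2, ceiling=None):
        for _ in range(passes):
            capability *= gain
            if ceiling is not None:
                capability = min(capability, ceiling)  # hardware/cost bound
        return capability

    if __name__ == "__main__":
        print("unbounded after 30 passes: %.1f" % improve(1.0, 30))               # ~237x
        print("with a 50x ceiling:        %.1f" % improve(1.0, 30, ceiling=50.0)) # 50x

The exponential only runs away if nothing in the loop is scarce.  My claim is that something always is.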

- s


