[ExI] Hard Takeoff

Samantha Atkins sjatkins at mac.com
Tue Nov 16 07:00:43 UTC 2010


On Nov 15, 2010, at 6:33 PM, Michael Anissimov wrote:

> Hi John,
> 
> On Sun, Nov 14, 2010 at 9:27 PM, John Grigg <possiblepaths2050 at gmail.com> wrote:
> 
> I agree that self-improving AGI with access to advanced manufacturing
> and research facilities would probably be able to bootstrap itself at
> an exponential rate, rather than the speed at which humans created it
> in the first place.  But the "classic scenario" where this happens
> within minutes, hours or even days and months seems very doubtful in
> my view.
> 
> Am I missing something here?
> 
> MNT and merely human-equivalent AI that can copy itself but not qualitatively enhance its intelligence beyond the human level is enough for a hard takeoff within a few weeks, most likely, if you take the assumptions in the Phoenix nanofactory paper.  

MNT is of course not near term at all.  The latest guesstimates I saw from Drexler, Freitas and Merkle put it a good three decades out.  So if we get human-level AI before that, it is likely to be expensive and not at all easy for it to upgrade itself quickly.
A few very expensive human-equivalent AGIs will not be revolutionary on any short timescale.
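
The "few weeks" figure is really just compounding doublings, by the way.  A minimal back-of-envelope sketch, assuming a purely hypothetical 12-hour replication time (my illustration, not a number taken from the Phoenix paper):

    # Back-of-envelope sketch of the exponential bootstrap argument.
    # The doubling time is an assumption for illustration only, not a
    # figure from the Phoenix nanofactory paper.
    DOUBLING_TIME_HOURS = 12.0  # assumed replication time per nanofactory
    WEEKS = 3                   # the "few weeks" horizon in question

    hours = WEEKS * 7 * 24
    doublings = hours / DOUBLING_TIME_HOURS
    print(f"{doublings:.0f} doublings in {WEEKS} weeks "
          f"-> capacity multiplied by ~{2**doublings:.1e}")

Forty-odd doublings in three weeks is the whole argument; everything interesting hides in whether any such doubling time is achievable at all, which is exactly what the three-decade MNT estimates call into question.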

> 
> Add in the possibility of qualitative intelligence enhancement and you get somewhere even faster.  
> 

Too many IF bridges need to be crossed between here and there for the argument to be very compelling.  Possible, yes.  Likely within three to four decades, not so much.


> Neocortex expanded in size by a factor of only about 4 from chimps to produce human intelligence.  The basic underlying design is much the same.  Imagine if expanding neocortex by a similar factor again led to a similar qualitative increase in intelligence.

I am not at all sure that would be possible with current human brain size and architecture.  But then I don't take well to strained analogies.

>  If that were so, then even a thousand AIs with so-expanded brains and a sophisticated manufacturing base would be like a group of 1000 humans with assault rifles and helicopters in a world of six billion chimps.

Even more strained!  :)  Where are you going to get a thousand human-level AGIs?  Under what assumptions about hardware and energy requirements?

>  If that were the case, then the Phoenix nanofactory + human-level AI-based estimate might be excessively conservative.   

For some time decades hence, maybe.  But it isn't a serious existential risk now.  Economic collapse is a very serious risk in this coming decade, with energy and resource crises close behind.  Those could cost us a substantial part of our technological/scientific infrastructure *before* MNT or AGI can be developed.  If that happens, there is a strong argument that humanity may never recover to the necessary level of infrastructure and resources.  That would be catastrophic.


- samantha