[ExI] Hard Takeoff

Michael Anissimov michaelanissimov at gmail.com
Tue Nov 16 02:33:28 UTC 2010


Hi John,

On Sun, Nov 14, 2010 at 9:27 PM, John Grigg <possiblepaths2050 at gmail.com> wrote:
>
>
> I agree that self-improving AGI with access to advanced manufacturing
> and research facilities would probably be able to bootstrap itself at
> an exponential rate, rather than the speed at which humans created it
> in the first place.  But the "classic scenario" where this happens
> within minutes, hours or even days and months seems very doubtful in
> my view.
>
> Am I missing something here?


MNT plus merely human-equivalent AI that can copy itself, but not
qualitatively enhance its intelligence beyond the human level, is most
likely enough for a hard takeoff within a few weeks, if you take the
assumptions in the Phoenix nanofactory paper.
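To see why self-replicating manufacturing compresses timescales so sharply, a back-of-the-envelope sketch helps. The doubling time and target capacity below are illustrative placeholders, not figures taken from the Phoenix nanofactory paper:

```python
import math

# Hypothetical illustration of exponential self-replication.
# doubling_time_hours and target_units are assumed placeholders,
# NOT figures from the Phoenix nanofactory paper.
doubling_time_hours = 12.0
initial_units = 1
target_units = 1_000_000  # assumed target manufacturing capacity

# Number of doublings needed to go from 1 unit to the target.
doublings = math.ceil(math.log2(target_units / initial_units))
total_days = doublings * doubling_time_hours / 24

print(doublings)   # 20 doublings
print(total_days)  # 10.0 days
```

Even with a fairly slow assumed doubling time, reaching a million-fold capacity takes only a handful of days, which is why "a few weeks" is not an aggressive estimate under these assumptions.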

Add in the possibility of qualitative intelligence enhancement and you get
there even faster.

The neocortex expanded in size by a factor of only about 4 from chimps to
humans to produce human intelligence; the basic underlying design is much
the same.  Imagine if expanding the neocortex by a similar factor again led
to a similar qualitative increase in intelligence.  If so, then even a
thousand AIs with such expanded brains and a sophisticated manufacturing
base would be like a group of 1,000 humans with assault rifles and
helicopters in a world of six billion chimps, and the Phoenix nanofactory
+ human-level AI estimate might be excessively conservative.

-- 
michael.anissimov at singinst.org
Singularity Institute
Media Director