[ExI] Hard Takeoff-money

Samantha Atkins sjatkins at mac.com
Tue Nov 16 06:36:38 UTC 2010


On Nov 15, 2010, at 7:31 AM, Keith Henson wrote:

> On Mon, Nov 15, 2010 at 5:00 AM,  John Grigg
> <possiblepaths2050 at gmail.com> wrote:
>> 
>> Brent Allsop wrote:
>> I would agree that a copy-able human level AI would launch a take-off,
>> leaving what we have today, to the degree that it is unchanged, in the
>> dust.  But I don't think achieving this is going to be anything like
>> spontaneous, as you seem to assume is possible.  The rate of progress
>> of intelligence is so painfully slow.  So slow, in fact, that many
>> have accused great old AI folks like Minsky of being completely
>> mistaken.
>> 
>> Michael Anissimov replied:
>> There's a huge difference between the time from today to human-level
>> AGI and the time from human-level AGI to superintelligent AGI.
>> They're completely different questions.  As for a fast rate, would
>> you still be skeptical if the AGI in question had access to advanced
>> molecular manufacturing?
>> 
>> I agree that self-improving AGI with access to advanced manufacturing
>> and research facilities would probably be able to bootstrap itself at
>> an exponential rate, rather than at the speed at which humans created
>> it in the first place.  But the "classic scenario" where this happens
>> within minutes, hours, or even days or months seems very doubtful in
>> my view.
>> 
>> Am I missing something here?
> 
> What does an AI mainly need?  Processing power and storage.  If there
> are vast amounts of both that can be exploited, then all you need is a
> storage estimate for the AI and the average bandwidth between storage
> locations to determine the replication rate.
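Taking that at face value, the arithmetic is simple: replication time is
roughly state size divided by link bandwidth.  A minimal sketch in Python,
where the 100 TB image size and the 10 Gbit/s link are placeholder
assumptions for illustration, not estimates from this thread:

  # Back-of-the-envelope copy time: time = state size / bandwidth.
  # Both constants are hypothetical placeholders, not real estimates.
  AGI_STATE_BYTES = 100e12     # assume a ~100 TB image of weights/state
  LINK_BITS_PER_SEC = 10e9     # assume a 10 Gbit/s path between hosts

  def replication_time_seconds(state_bytes: float, link_bps: float) -> float:
      """Seconds to copy one AGI image over one link, ignoring overhead."""
      return state_bytes * 8 / link_bps

  t = replication_time_seconds(AGI_STATE_BYTES, LINK_BITS_PER_SEC)
  print(f"one copy takes ~{t / 3600:.1f} hours")   # ~22 hours here

Since each finished copy can itself start copying, the population doubles
about once per copy interval, so even a day-long copy time yields thousands
of instances within a couple of weeks, provided the hardware to run them on
actually exists and can be had.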

But wait.  The first AGIs will likely be ridiculously expensive.  So what if they can copy themselves?  If you can only afford one, and it is initially only as competent as a human expert, then you will go with entire campuses of human experts until the cost comes down sufficiently, say a decade or two after the first AGI.  Until then it will not matter much that they are in principle copyable.  Of course, if someone cracks the algorithms to run human-level AGI on much more modest hardware, then we get widespread AGI proliferation much more quickly.
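To put rough numbers on that break-even point (every figure here is a
made-up assumption, not a forecast): if the first AGI costs on the order
of $1B a year to run, a human expert costs $200k a year, and hardware
costs halve every 18 months or so, the crossover takes about two decades:

  import math

  # Hypothetical break-even for the cost argument above.  All three
  # figures are assumptions for illustration only.
  AGI_COST_PER_YEAR = 1e9      # assume running the first AGI costs $1B/yr
  HUMAN_EXPERT_COST = 2e5      # assume a human expert costs $200k/yr
  COST_HALVING_YEARS = 1.5     # assume hardware cost halves every 18 months

  halvings = math.ceil(math.log2(AGI_COST_PER_YEAR / HUMAN_EXPERT_COST))
  print(f"break-even after ~{halvings * COST_HALVING_YEARS:.0f} years")
  # -> ~20 years with these inputs, consistent with "a decade or two"

Shrink the initial cost gap, or find algorithms that run on commodity
hardware, and that timeline collapses accordingly.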

- samantha


