[extropy-chat] Bluff and the Darwin award

Heartland velvet977 at hotmail.com
Wed May 17 21:20:00 UTC 2006


Samantha:
> An initial environment does not have to include the entire world in order
> for the intelligence to grow.

Russell:
> Sure. That doesn't change the fact that the AI will depend on an environment
> and the rate at which it learns will depend on the rate at which it can do
> things in that environment. The reason I'm emphasizing this is to refute
> Eliezer's idea that the AI can learn at the rate at which transistors switch
> between 0 and 1, independent of the real world.

Hard takeoff, as I understand it, refers not to the growing impact that 
intelligence growth will have on the outside environment, but to the growth 
itself. If an AI is capable of making the first improvement to itself, that 
already means it has enough knowledge about its own structure, and about ways 
of improving it, to proceed without seeking extra knowledge outside its 
immediate environment.

And even if such an AI were required to go outside its environment to learn 
how to improve itself, a smarter AI should be able to shrink that requirement 
on each iteration. In any case, I don't see how a hard takeoff could fail to 
follow soon after the first iteration.
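
To make the shape of that argument concrete, here is a toy model (a minimal 
sketch; every constant below is made up purely for illustration) in which 
internal rework runs at machine speed while external learning runs at 
real-world speed:

    # Toy model of recursive self-improvement with a shrinking
    # external-learning requirement. All constants are hypothetical;
    # this illustrates the shape of the argument, not a prediction.
    capability = 1.0      # abstract measure of the AI's intelligence
    external_need = 1.0   # fraction of each improvement needing outside learning
    EXTERNAL_COST = 10.0  # time units per unit of slow, real-world learning
    INTERNAL_COST = 1.0   # time units per unit of fast, internal rework

    t = 0.0
    for i in range(1, 11):
        # Internal work speeds up with capability; external learning does not.
        step = INTERNAL_COST / capability + EXTERNAL_COST * external_need
        t += step
        capability *= 1.5             # each rewrite yields a smarter AI
        external_need /= capability   # a smarter AI needs less outside input
        print(f"iteration {i}: t={t:.2f}  capability={capability:.2f}  "
              f"external_need={external_need:.5f}")

Within a few iterations the external term collapses and the time per 
iteration approaches the fast internal cost alone, which is the hard-takeoff 
picture.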

H. 


