[ExI] Hard Takeoff

Samantha Atkins sjatkins at mac.com
Thu Nov 18 17:18:30 UTC 2010


On Nov 17, 2010, at 9:11 PM, The Avantguardian wrote:

> 
>> From: Michael Anissimov <michaelanissimov at gmail.com>
>> To: ExI chat list <extropy-chat at lists.extropy.org>
>> Sent: Sun, November 14, 2010 9:52:06 AM
>> Subject: [ExI] Hard Takeoff
> 
> Michael Anissimov writes:
> 
> We have real, evidence-based arguments for an abrupt takeoff.  One is that the 
> human speed and quality of thinking is not necessarily any sort of optimal 
> thing, thus we shouldn't be shocked if another intelligent species can easily 
> surpass us as we surpassed others.  We deserve a real debate, not accusations of 
> monotheism.

There is a sound argument that we are not the pinnacle of possible intelligence.  But that alone does not imply or support the claim that an AGI will FOOM to godlike status in an extremely short time (days to a few years at most) once it reaches human level.

> ------------------------------
> 
> I have some questions, perhaps naive, regarding the feasibility of the hard 
> takeoff scenario: Is self-improvement really possible for a computer program? 
> 

Certainly.  Programs that search for better algorithms within delimited spaces exist now.  Programs that re-tune themselves to a more optimal configuration for the current context also exist.
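
For a concrete toy example, here is a minimal sketch in Python of the second kind of thing, a program that re-tunes one of its own parameters against a measured cost.  The workload and the parameter are invented purely for illustration.

import random
import time

def run_workload(batch_size):
    # Stand-in for real work whose cost depends on a tunable parameter.
    t0 = time.time()
    total = 0
    for start in range(0, 100000, batch_size):
        total += sum(range(start, min(start + batch_size, 100000)))
    return time.time() - t0

def self_tune(initial=10, rounds=20):
    best, best_cost = initial, run_workload(initial)
    for _ in range(rounds):
        candidate = max(1, best + random.choice([-5, -1, 1, 5]))
        cost = run_workload(candidate)
        if cost < best_cost:       # keep only measured improvements
            best, best_cost = candidate, cost
    return best, best_cost

if __name__ == "__main__":
    print(self_tune())

Nothing deep is going on there, but it is self-adjustment of a limited sort: the program's future configuration depends on what it measured about its own past performance.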

> 
> If this "improvement" is truly recursive, then that implies that it iterates a 
> function with the output of the function call being the input for the next 
> identical function call.

An adaptive loop is usually quite a bit longer than a single function call.  You are conflating "function" in the generic sense of a process with goals and a definable fitness function (a measure of efficacy toward those goals) with "function" in the sense of a single software function.  Some functions of the first kind (which may be composed of very many functions of the second kind) evaluate the efficacy of other functions and explore improvements to them.
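
A rough sketch of that distinction in Python, with the candidate implementations and the fitness measure invented just for illustration: the outer process has a goal (sort correctly and quickly) and a fitness measure, and it searches over candidate versions of the inner software function.

import timeit

def fitness(fn, data):
    # Efficacy measure for the goal "sort this data correctly and quickly".
    if fn(list(data)) != sorted(data):
        return float("inf")
    return timeit.timeit(lambda: fn(list(data)), number=5)

def bubble_sort(xs):
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def builtin_sort(xs):
    xs.sort()
    return xs

def improve(candidates, data):
    # A function of the first kind: it evaluates other functions and
    # keeps whichever one scores best on the fitness measure.
    return min(candidates, key=lambda fn: fitness(fn, data))

if __name__ == "__main__":
    data = list(range(300, 0, -1))
    print(improve([bubble_sort, builtin_sort], data).__name__)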


> So the result will simply be more of the same function. 
> And if the initial "intelligence function" is flawed, then all recursive 
> iterations of the function will have the same flaw. So it would not really be 
> qualitatively improving, it would simply be quantitatively increasing. For 
> example, if I had two or even four identical brains, none of them might be able 
> to answer this question, although I might be able to do four other mental tasks 
> that I am capable of doing, at once.
> 
> On the other hand, if the seed AI is able to actually rewrite the code of its 
> intelligence function to non-recursively improve itself, how would it avoid 
> falling victim to the halting problem? 

Why is halting important to continuous improvement?


> If there is no way, even in principle, to 
> algorithmically determine beforehand whether a given program with a given input 
> will halt or not, would an AI risk getting stuck in an infinite loop by messing 
> with its own programming? The halting problem is only defined for Turing 
> machines so a quantum computer may overcome it, but I am curious if any SIAI 
> people have considered it in their analysis of hard versus soft takeoff.
> 

Nope, because rewriting its own programming is not all it is doing.  At any moment it is doing useful work with its current best working adaptations. 
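
To picture that, here is one more toy sketch (all of the names are illustrative, not anyone's actual design): the improvement search runs under a time budget, and the system keeps doing its work with whatever its current best adaptation is, so an open-ended search can never hang the whole system.

import random
import time

def try_candidate_improvement(current):
    # Stand-in for an open-ended (possibly non-terminating) search step.
    candidate = current + random.uniform(-1.0, 1.0)
    return candidate if candidate > current else None

def improve_with_budget(current, budget_seconds=0.01):
    deadline = time.time() + budget_seconds
    best = current
    while time.time() < deadline:      # hard cap on time spent searching
        candidate = try_candidate_improvement(best)
        if candidate is not None:
            best = candidate
    return best

def main_loop(steps=5):
    quality = 0.0
    for _ in range(steps):
        # Do useful work with the current best adaptation...
        print("working with quality", round(quality, 3))
        # ...then spend a bounded slice of time looking for a better one.
        quality = improve_with_budget(quality)

if __name__ == "__main__":
    main_loop()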

- s 


