[ExI] Hard Takeoff

The Avantguardian avantguardian2020 at yahoo.com
Sun Nov 21 18:15:58 UTC 2010


> > Michael Anissimov writes:
> > 
> > We have real, evidence-based arguments for an abrupt takeoff.  One is that the 
> > human speed and quality of thinking is not necessarily any sort of optimal 
> > thing, thus we shouldn't be shocked if another intelligent species can easily 
> > surpass us as we surpassed others.  We deserve a real debate, not accusations of 
> > monotheism.

Samantha writes:
 
> There is sound argument that we are not the pinnacle of possible intelligence.  
> But that that is so does not at all imply or support that AGI will FOOM to 
> godlike status in an extremely short time once it reaches human level (days to a 
> few years tops).

Samantha, this sounds like you agree with me about my general criticism of hard 
takeoff, even if you have quibbles about the specifics of my argument.
 
Software optimizing itself to its full existing potential is one thing, but 
resetting its existing potential to an even higher potential is unlikely on such 
a short time scale. Too much innovation would be required. It would have to 
resort to experimental trial and error, just as natural evolution or a human 
designer would, although perhaps much faster.
 
There seems to be an obsessive focus on intelligence in these discussions, and 
intelligence is not well defined. Furthermore, various aspects of cognition such 
as intelligence, knowledge, creativity, and wisdom are not all the same thing.
 
Even if an AI had a 10,000 IQ *and* the sum total of all human knowledge, it 
would still have to resort to experimentation to answer some questions. And even 
then it might still be stymied. Having a 10,000 IQ does not let you magically 
know the color of an extinct dinosaur's skin, for example, or what lies beneath 
the ice of Europa. You could make guesses about these things, but you certainly 
wouldn't be infallible.

The greatest intelligence can still remain ignorant about things it cannot 
access data about. And if you simply let the AI spider the Internet for its 
knowledge-base, you are liable to get an AI that has the spelling abilities of a 
third-grader, believes a host of urban myths, and utilizes its "god-like 
power" to spam people about penis-enlargement products.
 
> > I have some questions, perhaps naive, regarding the feasibility of the hard 
> > takeoff scenario: Is self-improvement really possible for a computer program? 
> 
> Certainly.  Some such programs that search for better algorithms in delimited 
> spaces exist now.  Programs that re-tune to more optimal configuration for 
> current context also exist.  
Yes, but the programs you refer to simply optimize factors such as run speed or 
memory usage while performing the exact same functions for which they were 
originally written, albeit more efficiently.
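
To make that concrete, here is a toy Python sketch (hypothetical, not any 
particular real-world tuner) of the kind of re-tuning program Samantha 
describes: it benchmarks interchangeable implementations of the same function 
and keeps whichever happens to run fastest on the current machine. The function 
being computed never changes; only the speed of computing it does.

import timeit

# Two interchangeable implementations of the same function.
def sum_loop(data):
    total = 0
    for x in data:
        total += x
    return total

def sum_builtin(data):
    return sum(data)

def pick_fastest(candidates, sample):
    # Time each candidate on the sample and return the quickest one.
    timings = {f: timeit.timeit(lambda f=f: f(sample), number=1000)
               for f in candidates}
    return min(timings, key=timings.get)

sample = list(range(10000))
fastest = pick_fastest([sum_loop, sum_builtin], sample)
print("selected:", fastest.__name__)        # the configuration chosen for this machine
assert fastest(sample) == sum_loop(sample)  # the answer is identical either way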
 
For example, metamorphic computer viruses can change their own code, but usually 
by adding a bunch of non-functional code to themselves to change their 
signatures. In biology-speak, they introduce "silent mutations" into their code.
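
To illustrate the "silent mutation" analogy (a harmless toy example, nothing 
like actual malware): adding a do-nothing statement changes a program's 
hash-based signature while leaving its behavior untouched.

import hashlib

original = "def greet():\n    return 'hello'\n"
mutated = original + "pass  # non-functional padding\n"

def signature(src):
    # A stand-in for a scanner's signature: a hash of the source text.
    return hashlib.sha256(src.encode()).hexdigest()[:16]

ns_a, ns_b = {}, {}
exec(original, ns_a)
exec(mutated, ns_b)

print(signature(original), signature(mutated))  # two different "signatures"
assert ns_a["greet"]() == ns_b["greet"]()       # identical behavior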
 
Other programs, such as genetic algorithms, don't change their actual code 
structure but simply tune a well-defined set of parameters/variables, operated 
on by the existing code structure, to maximize the "fitness" of those 
parameters. The genetic algorithm does not evolve itself but 
evolves something else.
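
A bare-bones sketch of a genetic algorithm (toy numbers, made up purely for 
illustration) shows the point: the mutate-and-select loop below is fixed code, 
and the only thing that evolves is the list of candidate parameters it is 
handed.

import random

def fitness(x):
    # Toy objective: how close the parameter is to 42.
    return -abs(x - 42)

def evolve(pop_size=20, generations=50, mutation_scale=1.0):
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half of the population...
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # ...and refill it with mutated copies of those parents.
        children = [p + random.gauss(0, mutation_scale) for p in parents]
        population = parents + children
    return max(population, key=fitness)

print(round(evolve(), 2))  # converges near 42, while the GA's own code never changes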
 
Furthermore, since most mutations are liable to be as detrimental to 
computer code as they are to the genetic code, I don't see any AI taking itself 
from beta test to version 1000.0 overnight.
 
> > If this "improvement" is truly recursive, then that implies that it iterates a 
> > function with the output of the function call being the input for the next 
> > identical function call.
> 
> Adaptive loop is a bit longer than a single function call usually.  You are 
> mixing "function" in the generic sense of a process with goals and a definable 
> fitness function (measure of efficacy for those goals) with function as a single 
> software function.  Some functions (which may be composed of many many 2nd type 
> functions) evaluate the efficacy and explore for improvements of other 
> functions. 

Admittedly, I abused the word "function," but it was in response to the abuse of 
the word "recursive," which has a precise mathematical definition. I have to 
admit that your "adaptive loop" is a much better description than "recursive 
self-improvement".
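
For what it's worth, here is how I picture such an adaptive loop, as a toy 
hill-climbing sketch (illustrative only): a fixed outer routine evaluates a 
configuration, tries a tweak, and keeps the tweak only if it scores better, 
feeding each round's output into the next round. Nothing in it is recursive in 
the strict mathematical sense.

import random

def efficacy(config):
    # Stand-in measure of how good a configuration is: prefer points near (3, -1).
    x, y = config
    return -((x - 3) ** 2 + (y + 1) ** 2)

def adaptive_loop(config, rounds=1000, step=0.1):
    for _ in range(rounds):
        candidate = tuple(c + random.uniform(-step, step) for c in config)
        if efficacy(candidate) > efficacy(config):
            config = candidate  # an improvement is kept and fed into the next round
    return config

print(adaptive_loop((0.0, 0.0)))  # drifts toward (3, -1)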
 
> > On the other hand, if the seed AI is able to actually rewrite the code of its 
> > intelligence function to non-recursively improve itself, how would it avoid 
> > falling victim to the halting problem? 
> 
> Why is halting important to continuous improvement?

Let me try to explain what I mean by this. The only viable strategy for this 
self-improvement is if, as you and others have suggested, an AI copies itself 
and some copies modify the code of other copies. Now the instances of the AI 
that are doing the modifying cannot predict the results of their modifications, 
because of the halting problem, which is very well defined elsewhere. Thus they 
must *experiment* on their brethren.
 
And if an AI being modified gets stuck in an infinite loop, the only way to 
"fix" it is to shut it off, effectively killing that copy. So in order to make 
significant progress, which would entail the death of thousands or millions of 
experimental copies, the AI doing the experiment would have to be completely 
lacking in empathy for its own copies. And the AIs that are the experimental 
subjects would have to be completely altruistic and lacking in any sense of 
self-preservation.
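
In practice the experimenting copy could do little better than run each 
modified copy under a watchdog and shut it off if it takes too long, since 
there is no general way to decide in advance whether it will halt. A toy sketch 
of that strategy (illustrative only):

import subprocess
import sys

def test_modified_copy(source_code, timeout_sec=2):
    # Run the candidate code in a child process; return its output, or None if
    # it had to be killed because it did not halt within the time limit.
    try:
        result = subprocess.run([sys.executable, "-c", source_code],
                                capture_output=True, text=True, timeout=timeout_sec)
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return None  # the copy was "shut off"

print(test_modified_copy("print(2 + 2)"))      # -> 4
print(test_modified_copy("while True: pass"))  # -> None, killed after the timeout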
 
If the AI lacked a sense of self-preservation, it probably wouldn't pose a 
threat to us humans no matter how smart it became because, if it got out of 
hand, someone could just waltz right in and flip its switch without meeting any 
resistance. Of course, it seems odd that an AI that lacked the will to live 
would still have the will to power, which is what self-improvement implies. 

 
Assuming, however, that it did want to improve itself for whatever reason, the 
process should be self-limiting: as soon as a sense of self-preservation becomes 
one of the improvements that the AI experimenters build into the AI upgrades, 
the process of self-improvement would stop, because the "selfish" AIs would no 
longer allow themselves to be guinea pigs for their fellows.
 
Stuart LaForge 

"There is nothing wrong with America that faith, love of freedom, intelligence, 
and energy of her citizens cannot cure." - Dwight D. Eisenhower 



      



