[ExI] Hard Takeoff

Samantha Atkins sjatkins at mac.com
Sun Nov 21 21:20:08 UTC 2010


On Nov 21, 2010, at 10:15 AM, The Avantguardian wrote:
>  
> Software optimizing itself to its full existing potential is one thing 
> but resetting its existing potential to an even higher potential is unlikely on 
> such a short time scale. Too much innovation would be required, especially for 
> such a short time period. It would have to resort to experimental trial and error just 
> like natural evolution or a human designer would, although perhaps much faster.

Not really.  First of all, there is only what is running or available to run now, and what may be run later, which may or may not be in runnable condition yet.  Within the running code we can distinguish production code, that which is doing the system's main work at the moment, from somewhat sandboxed experimental code.  We can also have entire systems whose only purpose is to analyze code from bottom to top looking for ways to improve it.  In other words, it is not one monolithic code body but a community of code and software specialists taking part in the process of self-improvement.  Given clean boundaries, whether in functional or other styles, each module can be improved, changed, and plugged back into production over time. 
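
To make the modular picture concrete, here is a toy sketch in Python (entirely my own illustration; the registry, names, and thresholds are invented, not any existing system).  A "production" implementation keeps doing the real work, and a candidate from the experimental side is only promoted if it agrees with production on a test workload and runs faster:

    import time

    class ModuleRegistry:
        def __init__(self):
            self.production = {}   # name -> callable currently doing the real work

        def promote_if_better(self, name, candidate, test_inputs):
            current = self.production[name]
            # Correctness gate: the candidate must agree with production.
            if any(candidate(x) != current(x) for x in test_inputs):
                return False
            # Performance gate: the candidate must be faster on the same workload.
            def timed(fn):
                start = time.perf_counter()
                for x in test_inputs:
                    fn(x)
                return time.perf_counter() - start
            if timed(candidate) < timed(current):
                self.production[name] = candidate   # hot-swap behind the same interface
                return True
            return False

    registry = ModuleRegistry()
    registry.production["fib"] = lambda n: n if n < 2 else (
        registry.production["fib"](n - 1) + registry.production["fib"](n - 2))

    def fib_iterative(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(registry.promote_if_better("fib", fib_iterative, list(range(20))))

The point is only that with clean boundaries the unit of self-improvement is the module, not the whole system, and production keeps running while candidates are vetted.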

>  
> There seems to be an obsessive focus on intelligence in these 
> discussions and intelligence is not well defined. Furthermore, various aspects of 
> cognition such as intelligence, knowledge, creativity, and wisdom are not all 
> the same thing.
>  

Whether or not we are intelligent enough to define intelligence to our satisfaction, it is undeniable that intelligence exists and is critically important to our survival, our continued wellbeing, and our progress toward our dreams.


> Even if an AI had a 10,000 IQ *and* the sum total of all human knowledge, it 
> would still have to resort to experimentation to answer some questions. And even 
> then it might still be stymied. Having a 10,000 IQ does not let you magically 
> know the color of an extinct dinosaur's skin for example or what lies beneath 
> the ice of Europa. You could make guesses about these things but you certainly 
> wouldn't be infallible.
> 

Well, sure.  At any finite point in the intelligence domain there will be some problems and questions that are beyond current abilities.  Intelligence is not magic, but neither should it be denigrated precisely because it does not confer some magical omniscience.

>>> I have some questions, perhaps naive, regarding the feasibility of the hard 
>>> takeoff scenario: Is self-improvement really possible for a computer program? 
>> 
>> Certainly.  Some such programs that search for better algorithms in delimited 
>> spaces exist now.  Programs that re-tune to more optimal configuration for 
>> current context also exist.  
> 
> Yes, but the programs you refer to simply optimize such factors as run speed or 
> memory usage while performing the exact same functions for which they were 
> originally written, albeit more efficiently.

How fast a given bit of work can be done is very much a part of that somewhat nebulous thing we call intelligence.  And the space of self-improving software today is bigger than that.  Different algorithms are plugged in depending on current context, and parts of the code scaffolding they are plugged into are rearranged.  We have research systems that go much further, like the Synthesis OS, which rewrites kernel operations on the fly to avoid wait states and conflicts, or the work of Moshe Looks on evolving more optimal functions.

What do you mean by "same function" though?  If you mean the same logical function, then sure, since accomplishing that function most efficiently is the very criterion of success for such a local optimization.  But there is an entire hierarchy of functions composed of other functions, and optimization can happen at any or all of those levels.
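
A trivial Python sketch of what I mean by the same logical function with the implementation plugged in by context (my own toy example, thresholds invented):

    def _insertion_sort(items):
        out = list(items)
        for i in range(1, len(out)):
            j = i
            while j > 0 and out[j - 1] > out[j]:
                out[j - 1], out[j] = out[j], out[j - 1]
                j -= 1
        return out

    def sorted_adaptive(items, small_threshold=32):
        # Same logical function -- return a sorted copy -- but the algorithm
        # is chosen per call from the current context (here, input size).
        if len(items) <= small_threshold:
            return _insertion_sort(items)
        return sorted(items)

    assert sorted_adaptive([3, 1, 2]) == [1, 2, 3]
    assert sorted_adaptive(list(range(1000, 0, -1))) == list(range(1, 1001))

Either implementation can be swapped or tuned without changing what the function means to its callers, and the same move is available at every level of the hierarchy above it.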

>  
> For example, metamorphic computer viruses can change their own code but usually 
> by adding a bunch of non-functional code to themselves to change their 
> signatures. In biology-speak, by introducing "silent mutations" in their code.
>  
> Other programs such as genetic algorithms don't change their actual code 
> structure but simply optimize a well-defined set of parameters/variables that 
> are operated on by the existing code structure to optimize the "fitness" of 
> those parameters. The genetic algorithm does not evolve itself but 
> evolves something else.

That depends.  GA techniques are also used to turn out new code, not just to optimize on the very narrow scale we have long known how to handle with non-AI and non-GA techniques.  What a GA operates on, its payload if you will, can be anything you like, provided you can create a reasonable mutation function and a workable fitness function for it - including arbitrary code. 
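
Here is a minimal sketch of that idea in Python (my own toy, not anyone's production GA; the mutation and fitness functions are deliberately crude).  The payload being evolved is code, small arithmetic expressions in x, and the GA machinery only needs a mutation function and a fitness function defined over that payload:

    import random

    TARGET = lambda x: x * x + 1        # the behavior we want the evolved code to match
    SAMPLES = range(-5, 6)
    ATOMS = ["x", "1", "2"]
    OPS = ["+", "-", "*"]

    def random_expr(depth=2):
        if depth == 0 or random.random() < 0.3:
            return random.choice(ATOMS)
        return "(%s %s %s)" % (random_expr(depth - 1), random.choice(OPS), random_expr(depth - 1))

    def mutate(expr):
        # Crude mutation: replace outright if it has grown too large or at random,
        # otherwise wrap the expression in a new random operation.
        if len(expr) > 60 or random.random() < 0.5:
            return random_expr()
        return "(%s %s %s)" % (expr, random.choice(OPS), random_expr(1))

    def fitness(expr):
        try:
            return sum(abs(eval(expr, {"x": x}) - TARGET(x)) for x in SAMPLES)
        except Exception:
            return float("inf")     # broken payloads simply score badly

    population = [random_expr() for _ in range(50)]
    for _ in range(200):
        population.sort(key=fitness)
        parents = population[:10]
        population = parents + [mutate(random.choice(parents)) for _ in range(40)]

    best = min(population, key=fitness)
    print(best, fitness(best))

Nothing in that loop cares that the payload happens to be code; swap in a different representation, mutation operator, and fitness measure and the same machinery evolves something else entirely.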


>  
> Furthermore since most mutations are liable to be as detrimental to 
> computer code as they are to the genetic code, I don't see any AI taking itself 
> from beta test to version 1000.0 overnight.

GA techniques are not the only techniques it will use.  And remember, you are starting at human genius level for this thought experiment.  Suppose such a system devotes itself 24/7 to becoming an AGI computer scientist.  Then it can do any sort of optimization and exploration that a human of the same abilities can, except that it has several advantages over humans of similar training and ability.  

>  
>>> On the other hand, if the seed AI is able to actually rewrite the code of 
>>> its intelligence function to non-recursively improve itself, how would it avoid 
>>> falling victim to the halting problem? 
>> 
>> Why is halting important to continuous improvement?
> 
> Let me try to explain what I mean by this. The only viable strategy for this 
> self improvement is if, as you and others have suggested, an AI copies itself 
> and some copies modify the code of other copies. Now the instances of the AI 
> that are doing the modifying cannot predict the results of their modifications 
> because of the halting problem which is very well defined elsewhere. Thus they 
> must *experiment* on their brethren.

Actually, I did not suggest that the entire AI is copied and that some copies modify themselves or others.  I cleared that up above in this message.  You are not seeing the situation clearly in this paragraph.

>  
> And if an AI being modified gets stuck in an infinite loop, the only way to 
> "fix" it is to shut it off, effectively killing that copy.

Continuous self-improvement by definition means there is no final state.  But this does not mean a self-improving entity is "stuck" and needs rebooting.
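
In practice the halting worry is handled with a watchdog rather than by rebooting anything.  A throwaway Python sketch (my own illustration): run the experimental candidate in its own process with a time budget, and if it never halts, kill that one process and move on:

    import multiprocessing

    def candidate_that_hangs():
        while True:        # stands in for a modification that never halts
            pass

    def run_with_budget(fn, seconds):
        proc = multiprocessing.Process(target=fn)
        proc.start()
        proc.join(seconds)
        if proc.is_alive():
            proc.terminate()   # only the experiment dies, not the experimenter
            proc.join()
            return False
        return True

    if __name__ == "__main__":
        print(run_with_budget(candidate_that_hangs, 1.0))   # prints False

The undecidability of halting in general does not stop you from bounding any particular experiment in time and resources.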

<rest snipped as based on misunderstanding>



>  
> If the AI lacked a sense of self-preservation, it probably wouldn't pose a 
> threat to us humans no matter how smart it became because, if it got out of 
> hand, someone could just waltz right in and flip its switch without meeting any 
> resistance. Of course, it seems odd that an AI that lacked the will to live 
> would still have the will to power, which is what self-improvement implies. 

Anthropomorphic assumption, or close to it.  If the AGI has the goal to do XYZ, that goal is not yet complete, it has no greater goal that would lead it to submit to termination, it is sufficiently intelligent, and it has actuators with which to work for its continued existence, then it will do so.  But having any of these not be the case does not mean that it cannot harm humanity.  That depends on what actuators, what abilities to take what actions, it does have or can develop.  It also depends on the impact of its very existence and use in various contexts on human psychology and institutions.

> 
>  
> Assuming, however, that it did want to improve itself for whatever reason, the 
> process should be self-limiting because as soon as a sense of 
> self-preservation is one of the improvements that the AI experimenters build 
> into the AI upgrades, the process of self-improvement would stop at that point, 
> because the "selfish" AIs would no longer allow themselves to be guinea pigs for 
> 

The self is a fluid, shifting thing, not an ossified fixed state.  Your argument does not follow.

- samantha



