[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

Richard Loosemore rpwl at lightlink.com
Wed Feb 2 18:49:55 UTC 2011


John Clark wrote:
> On Feb 2, 2011, at 11:40 AM, Richard Loosemore wrote:
> 
>> No, humans by themselves are (mild understatement) not safe.
> 
> True, and the reason is that the human mind does not work on a fixed 
> goal structure: no goal is always in the number one spot, not even the 
> goal of self-preservation. And the reason Evolution never developed a 
> fixed-goal intelligence is that it is impossible. As Turing proved over 
> 70 years ago, such a mind would be doomed to fall into infinite loops. 
> 
>> the bottom line is that when the system is controlled in this way, the 
>> stability of the motivation system is determined by a very large 
> >> number of mutually-reinforcing constraints, so if the system starts 
>> with intentions that are (shall we say) broadly empathic with the 
>> human species, it cannot start to conceive new, bizarre motivations 
>> that break a significant number of those constraints.
> 
> So when the humans tell the AI to do something that cannot be done, 
> which is very easy to do, your multi-billion-dollar AI turns into an 
> elaborate space heater, because unlike humans the AI has a fixed-goal 
> motivation system and nothing ever bores it, not even infinite loops.

Anything that could get into such a mindless state, with no true 
understanding of itself or the world in general, would not be an AI.
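
To make the contrast above concrete, here is a minimal toy sketch in 
Python (every name in it is hypothetical, nobody's actual architecture): 
a fixed-goal agent with no boredom mechanism spins forever on an 
unsatisfiable goal, while a constraint-gated agent simply refuses to 
adopt a candidate motivation that conflicts with most of its standing 
constraints.

# Toy sketch only -- hypothetical names, not anyone's real system.

def fixed_goal_agent(goal_satisfied, max_steps=None):
    """Pursue one fixed goal; without an external cutoff it never stops."""
    steps = 0
    while not goal_satisfied():
        steps += 1
        if max_steps is not None and steps >= max_steps:
            return "gave up"   # only an external cutoff breaks the loop
    return "done"

def constraint_gated_agent(candidate_motivation, constraints, tolerance=0.9):
    """Adopt a candidate motivation only if it stays consistent with
    most of a large set of mutually reinforcing constraints."""
    consistent = sum(1 for c in constraints if c(candidate_motivation))
    return "adopted" if consistent / len(constraints) >= tolerance else "rejected"

# An impossible goal locks up the fixed-goal agent until something
# external pulls the plug...
print(fixed_goal_agent(lambda: False, max_steps=1000))            # gave up

# ...while the constraint-gated agent declines a motivation that breaks
# most of what it already values, and accepts one that does not.
broadly_empathic = [lambda m: m != "harm humans"] * 100
print(constraint_gated_agent("harm humans", broadly_empathic))    # rejected
print(constraint_gated_agent("help humans", broadly_empathic))    # adopted

The point of the toy is only that the second design fails closed: an 
impossible or out-of-character motivation is never pursued in the first 
place, so there is nothing to get stuck in a loop on.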

From this we can conclude that you are not an AI.

You may be a good space heater, however: there is evidence of large 
amounts of hot air...


;-)


Richard Loosemore


