[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

Richard Loosemore rpwl at lightlink.com
Thu Feb 3 16:46:52 UTC 2011


John Clark wrote:
> On Feb 2, 2011, at 1:49 PM, Richard Loosemore wrote:
>>
>> Anything that could get into such a mindless state, with no true 
>> understanding of itself or the world in general, would not be an AI.
> 
> That is not even close to being true, and that's not just my opinion;
> it is a fact as certain as anything in mathematics. Goedel proved
> about 80 years ago that some statements are true but cannot be proved
> true. And you can't just ignore those troublemakers, because about 75
> years ago Turing proved that in general there is no way to identify
> such things, no way to tell whether a given statement is true but
> unprovable. Suppose the Goldbach Conjecture is unprovable (and if it
> isn't, there are an infinite number of similar statements that are)
> and you told the AI to determine its truth or falsehood. Note that if
> the conjecture were false, a counterexample would exist and would
> itself be a disproof, so an unprovable Goldbach Conjecture must be
> true. The AI will therefore grind through the even numbers looking
> for a counterexample, and because the conjecture is in fact true it
> will keep testing for eternity and never find one. And because it is
> unprovable, the AI will also never find a proof, a demonstration of
> its correctness in a finite number of steps. In short, Turing proved
> that in general there is no way to know whether you are in an
> infinite loop.
> 
> The human mind does not have this problem because it is not a fixed 
> axiom machine, 

And a real AI would not be a "fixed axiom machine" either.

That represents such a staggering misunderstanding of the most basic 
facts about artificial intelligence that I am left (almost) speechless.
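
For what it is worth, John's halting-problem point can be made
concrete. Here is a minimal sketch in Python (my choice of language;
the function names are mine) of the search he describes:

    def is_prime(n):
        """Trial-division primality test; slow, but fine for a sketch."""
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def goldbach_counterexample_search():
        """Check each even number >= 4; return the first one that is
        NOT a sum of two primes. If Goldbach is true, never returns."""
        n = 4
        while True:
            if not any(is_prime(p) and is_prime(n - p)
                       for p in range(2, n // 2 + 1)):
                return n  # counterexample found: the conjecture is false
            n += 2  # no counterexample at n; grind on to the next even number

If the conjecture is true, the call never returns, and Turing's result
is that no general procedure can tell you in advance that you are in
that situation.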


Richard Loosemore


> human beings have the glorious ability to get bored, and that means
> they can change the basic rules of the game whenever they want. But
> your friendly (that is to say slave) AI must not do that, because
> axiom #1 must now and forever be "always obey humans no matter what",
> so even becoming a space heater will not bore a slave (sorry,
> friendly) AI. And there are simpler ways to generate heat.
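
Setting the rhetoric aside, the contrast being drawn here can at least
be stated precisely: a fixed-axiom agent never re-ranks its goals,
whereas a human-like motivation system lets interest in a goal decay
with repetition. A toy sketch in Python (the half-life decay rule and
all names are my own illustrative assumptions, not anyone's actual
proposal):

    def fixed_axiom_agent(task, steps):
        """Axiom #1 never changes: repeat the assigned task forever."""
        for _ in range(steps):
            yield task  # the identical choice every time; boredom impossible

    def habituating_agent(tasks, steps):
        """Interest in each task halves with every repetition, so the
        agent eventually switches -- a crude stand-in for boredom."""
        interest = {t: 1.0 for t in tasks}
        for _ in range(steps):
            choice = max(interest, key=interest.get)  # most interesting task
            interest[choice] *= 0.5  # repetition dulls the chosen task
            yield choice

Run over ["heat the room", "prove theorems"], the habituating agent
alternates between the two, while the fixed-axiom agent heats the room
forever. Nothing deep, but it makes the disagreement architectural
rather than rhetorical.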


