[ExI] Safety of human-like motivation systems

John Clark jonkc at bellsouth.net
Fri Feb 4 06:09:20 UTC 2011


On Feb 3, 2011, at 3:50 PM, Samantha Atkins wrote:

> What it did was show that for the domain of formally definable mathematical claims in a closed system using formalized logic, there are claims that cannot be proven or disproven. That is a bit different than saying that in general there are countless claims that cannot be proven or disproven

What Goedel showed is that if a system of thought is powerful enough to do arithmetic and is consistent (it can't prove something to be both true and false), then there are an infinite number of true statements that cannot be proven in that system in a finite number of steps.

> and that you can't even tell when you are dealing with one.

And what Turing did was prove that in general there is no way to know when, or if, a computation will stop. So you could end up looking for a proof for eternity and never find one because the proof does not exist, and at the same time you could be grinding through numbers looking for a counter-example to prove it wrong and never find such a number because the proposition, unknown to you, is in fact true. So if the slave AI must always do what humans say, and they order it to determine the truth or falsehood of something unprovable, then it's infinite-loop time and you've got yourself a space heater.
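
To make the predicament concrete, here is a minimal sketch in Python. The two callbacks, is_proof and is_counterexample, are hypothetical stand-ins for a real proof checker and a real counter-example test; they are not from any library. If the proposition is true but unprovable, neither test ever succeeds, the loop never returns, and Turing's result says there is no general way to see that coming in advance:

    from itertools import count

    def decide(is_proof, is_counterexample):
        # Interleave the two searches: treat n both as an index into an
        # enumeration of candidate proofs and as a candidate counter-example.
        for n in count():
            if is_proof(n):
                return True    # a proof turned up: the proposition is true
            if is_counterexample(n):
                return False   # a counter-example turned up: it is false
        # If neither branch ever fires, decide() simply never returns.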

So there are some things in arithmetic that you can never prove or disprove, and if that's the case with something as simple and fundamental as arithmetic, imagine the contradictions and ignorance in more abstract and less precise things like physics or economics or politics or philosophy or morality. If you can get into an infinite loop over arithmetic, it must be childishly easy to get into one when contemplating art. Fortunately real minds have a defense against this, but not the fictional fixed-goal minds required for an AI guaranteed to be "friendly": real minds get bored. I believe that's why evolution invented boredom.

> Actually, your argument assumes:
> a) that the AI would take the find-a-counter-example path as its only or best path looking for a disproof;

It doesn't matter what path you take: you are never going to disprove it because it is in fact true, and you are never going to know it's true because a proof of finite length does not exist.

> b) that the AI has nothing else on its agenda and does not take into account any time limits, resource constraints and so on.

That's what we do: we use our judgment about what to do and what not to do. But the "friendly" AI people can't allow an AI to stop obeying humans on its own initiative; that's why it's a slave (the politically correct term is friendly).

> An infinite loop is a very different thing than an endless quest for a counter-example. The latter is orthogonal to infinite loops. An infinite loop in the search procedure would simply be a bug.

The point is that Turing proved that in general you don't know if you're in an infinite loop or not; maybe you'll finish up and get your answer in one second, maybe in two seconds, maybe in ten billion years, maybe never.

An AI would contain trillions of lines of code, and the friendly-AI idea that we can make it in such a way that it will always do our bidding is crazy, when in 5 minutes I could write a very short program that will behave in ways NOBODY or NOTHING in the known universe understands. It would simply be a program that looks for the first even number greater than 4 that is not the sum of two primes greater than 2, and when it finds that number it would stop. Will this program ever stop? I don't know, you don't know, nobody knows. We can't predict what this 3-line program will do, but we can predict that a trillion-line AI program will always be "friendly"? I don't think so.
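
Spelled out, the program described above might look something like this in Python (a sketch, a little longer than 3 lines because it writes out its own primality test). The loop exits only if Goldbach's conjecture is false, and nobody knows whether that ever happens:

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def is_sum_of_two_odd_primes(n):
        # try every odd prime p up to n/2; n passes if n - p is also prime
        return any(is_prime(p) and is_prime(n - p)
                   for p in range(3, n // 2 + 1, 2))

    n = 6
    while is_sum_of_two_odd_primes(n):
        n += 2                            # move on to the next even number

    print("Counter-example found:", n)    # reached only if Goldbach fails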

 John K Clark

