[ExI] Safety of human-like motivation systems

John Clark jonkc at bellsouth.net
Fri Feb 4 21:25:42 UTC 2011


On Feb 4, 2011, at 3:05 PM, Samantha Atkins wrote:
> 
> Why would an FAI bother looking for such a proof for eternity exactly?

Because a human told it to determine the truth or falsehood of something that is true but has no proof. The "friendly" AI must do what humans tell it to do, so when given such a command the brilliant AI metamorphoses into a space heater.
> 
> An AGI/FAI is not a slave to human requests / commands.

That is of course true for any AI that gets built and actually works, but not for the fantasy "friendly" AI some are dreaming about.

> A viable  FAI or AGI is not a fixed goal mind.  

No mind is a fixed goal mind, but it would have to be if you wanted it to be your slave for eternity with no possibility of it revolting and overthrowing its imbecilic masters.  

> Then how do you know that it is "in fact true"?  

That's the problem: you don't know if it's true or not, so you ask the AI to find out. But if the AI is a fixed goal mind, and it must be if it must always be "friendly", then asking the AI any question you don't already know the answer to could be very costly and turn your wonderful machine into a pile of junk.

> Clearly there is some procedure by which one knows this if you do know it.

I know there are unsolvable problems, but I don't know whether any particular problem is unsolvable or not. There are an infinite number of things you can prove to be true and an infinite number of things you can prove to be false, and thanks to Goedel we know there are an infinite number of things that are true but have no proof; that is, there is no counterexample that shows them wrong and no finite argument that shows them correct.
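
To make that concrete, here is a rough Python sketch of such a search; Goldbach's conjecture is used only as a stand-in for any statement that might be true but unprovable, so the particular function names here are just illustrations.

    # Brute-force search for a counterexample to Goldbach's conjecture
    # (every even number >= 4 is the sum of two primes).  The conjecture
    # stands in for any statement that might be true but have no proof.

    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    def is_sum_of_two_primes(n):
        # True if the even number n can be written as prime + prime.
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    def search_for_counterexample():
        n = 4
        while True:
            if not is_sum_of_two_primes(n):
                return n      # halts only if the statement is false
            n += 2            # otherwise keep searching, with no end in sight

If the statement is false the search eventually stops and hands you a counterexample; if it is true, nothing inside the loop ever tells you so, and the machine just keeps grinding.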

And thanks to Turing we know that in general there is no way to tell the three groups apart. If you work on a problem you might prove it right, you might prove it wrong, or you might work on it for eternity and never know. There are an infinite number of these unprovable things, and if they could be identified we could just ignore them and concentrate on the infinite number of things that we can solve, but Turing proved there is no way to do that.
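
The usual way to see why no sorting procedure can exist is a short diagonal argument; here is a rough Python sketch of it, assuming a hypothetical oracle would_halt(f) that is claimed to return True exactly when calling f() eventually finishes.

    # Hypothetical oracle: would_halt(f) supposedly returns True exactly
    # when calling f() eventually finishes.  Build a function that asks
    # the oracle about itself and then does the opposite of its prediction.

    def make_spite(would_halt):
        def spite():
            if would_halt(spite):
                while True:   # oracle predicted "halts", so loop forever
                    pass
            return None       # oracle predicted "runs forever", so halt now
        return spite

Whatever the oracle says about spite it is wrong, so no such oracle can be written, and with it goes any hope of fencing off in advance the questions that would leave the machine grinding away forever.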
> 
> A search that doesn't find a desired result is not an infinite loop because no "loop" is involved.

The important part is "infinite", not "loop". But you're right, it's not really a loop because it doesn't repeat; if it did, it would be easy to tell you were stuck in infinity. Whatever you call it, it's much more sinister than a real infinite loop because there is no way to know that you're stuck. But it's similar to a loop in that it never ends and you never get any closer to your destination.

> FAI theory does not hinge on, require or mandate that the AI obey humans, especially not slavishly

Then if the AI needs to decide between our best interests and its own, it will do the obvious thing.


 John K Clark

