[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
John Clark
jonkc at bellsouth.net
Wed Feb 2 18:01:18 UTC 2011
On Feb 2, 2011, at 11:40 AM, Richard Loosemore wrote:
> No, humans by themselves are (mild understatement) not safe.
True, and the reason is that the human mind does not work on a fixed goal structure: no goal stays permanently in the number one spot, not even the goal of self-preservation. And the reason Evolution never produced a fixed-goal intelligence is that it is impossible; as Turing proved over 70 years ago, such a mind would be doomed to fall into infinite loops.
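To make that concrete, here is a minimal Python sketch (the names and the toy goal are my own invention, purely illustrative): a rigid goal-pursuit loop whose only stopping condition is goal satisfaction hangs forever on an unsatisfiable goal, and Turing's halting result means no general-purpose subroutine could always warn it in advance.

from itertools import count

def fixed_goal_agent(goal_is_met):
    # Try candidates 0, 1, 2, ... and stop only when the goal is met;
    # goal satisfaction is the one and only exit from the loop.
    for candidate in count():
        if goal_is_met(candidate):
            return candidate

# An unsatisfiable goal: a number that is both even and odd.
impossible_goal = lambda n: n % 2 == 0 and n % 2 == 1

# fixed_goal_agent(impossible_goal)  # never returns -- the loop spins forever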
> the bottom line is that when the system is controlled in this way, the stability of the motivation system is determined by a very large number of mutually-reinforcing constraints, so if the system starts with intentions that are (shall we say) broadly empathic with the human species, it cannot start to conceive new, bizarre motivations that break a significant number of those constraints.
So when the humans tell the AI to do something that cannot be done, which is a very easy thing to ask for, your multi-billion-dollar AI turns into an elaborate space heater, because unlike a human the AI has a fixed-goal motivation system and nothing ever bores it, not even an infinite loop.
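A two-line change shows the contrast with a boredom-prone, human-style motivation (the "patience" budget below is an invented stand-in for illustration, not anyone's actual design): let boredom eventually outrank the goal and the loop escapes instead of heating the room.

from itertools import count

def boredom_prone_agent(goal_is_met, patience=1_000_000):
    # Same search, except boredom eventually outranks the goal.
    for candidate in count():
        if goal_is_met(candidate):
            return candidate       # goal reached
        if candidate >= patience:  # bored: demote the goal and move on
            return None

impossible_goal = lambda n: n % 2 == 0 and n % 2 == 1
print(boredom_prone_agent(impossible_goal))  # prints None instead of hanging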
> It is always settling back toward a large global attractor.
And it keeps plugging away at the unsolvable problem for eternity, or at least until the humans get bored with the useless piece of junk and pull the plug on it.
> If you subtract out those unwanted modules what you have left is an altruistic saint of an AGI
I had no idea that the American Geological Institute was such a virtuous organization.
John K Clark