[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

John Clark jonkc at bellsouth.net
Fri Feb 4 17:26:31 UTC 2011

On Feb 4, 2011, at 12:01 PM, Richard Loosemore wrote:

> Any intelligent system must have motivations 

Yes, certainly, but the motivations of anything intelligent never remain constant. A fondness for humans might motivate an AI to feel empathy and behave benevolently toward the creatures that made it for millions, maybe even billions, of nanoseconds; but there is no way to be certain that its motivation will not change many, many nanoseconds from now. 

  John K Clark 

