On Feb 4, 2011, at 12:01 PM, Richard Loosemore wrote:

> Any intelligent system must have motivations

Yes, certainly, but the motivations of anything intelligent never remain constant. A fondness for humans might motivate an AI to have empathy and behave benevolently toward the creatures that made it for millions, maybe even billions, of nanoseconds; but there is no way you can be certain that its motivation will not change many, many nanoseconds from now.

 John K Clark