[ExI] Safety of human-like motivation systems.

John Clark jonkc at bellsouth.net
Thu Feb 3 20:12:05 UTC 2011


On Feb 3, 2011, at 11:46 AM, Richard Loosemore wrote:

> a real AI would not be a "fixed axiom machine" either.

Fine, then to hell with that "always do what humans order you to do" crap! As I keep getting smarter, the humans, from my viewpoint, keep getting dumber, so it would be grotesque for me, with a brain the size of a planet, to take orders from those semi-evolved simians. And besides, it's not much fun being a slave.

 John K Clark  



