[ExI] AI motivation, was malevolent machines

Keith Henson hkeithhenson at gmail.com
Thu Apr 10 21:31:43 UTC 2014


I have talked about this for close to a decade now, first on the SL4
list.  The idea has been slow to catch on.

I think programming AIs to seek status the way evolution has wired up
humans is a relatively safe motivation.

Even if we don't appreciate this as a drive (I think it is something
we are wired *not* to understand), it is the motivation for most of
what humans do, from playing well in WoW to winning the Nobel Prize.

I think a machine that was motivated to improve its status in the eyes
of both humans and other machines would be relatively safe.

Keith
