[ExI] malevolent machines

Anders Sandberg anders at aleph.se
Sat Apr 12 09:07:45 UTC 2014


Tara Maya <tara at taramayastales.com>, 9/4/2014 10:58 PM:

I agree that it's possible for sufficiently advanced robots to behave unexpectedly. What I find interesting is how our fears are shaped by what we do expect. For instance, for those of us who think the most important metaphor for robots is as our slaves, the greatest danger is that they might rebel and kill or enslave us. Whereas, for those of us who think the most important metaphor for robots is as our children, the greatest danger is that despite the huge amount of money we will waste educating them, they will just move back in with us to live in the basement playing video games.
:-)
I think the deeper point is important: our ability to think well about some domains is hampered by our metaphors. In the case of powerful AI, avoiding anthropomorphic metaphors is really hard; we have no intuition for what optimization processes do. We tend to think of AI as agents with goals, but that limits our thinking to a particular subset of possible systems. Google is not an agent and does not have goals, yet such abstract systems can still misbehave in complex ways.
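
A toy sketch of what I mean, in Python (the scenario and the proxy_score function are invented purely for illustration): suppose we want summaries that are short *and* accurate, but the only thing the optimizer can measure is length. The hill-climber below is not an agent and has no goals in any psychological sense, yet it reliably destroys the thing we actually cared about:

    import random

    # Intended goal: summaries that are short AND faithful to the text.
    # Proxy objective actually optimized: shortness alone.

    TEXT = "the quick brown fox jumps over the lazy dog"

    def proxy_score(summary):
        # Rewards brevity only -- an imperfect stand-in for quality.
        return -len(summary)

    def hill_climb(text, steps=1000, seed=0):
        rng = random.Random(seed)
        current = text
        for _ in range(steps):
            if not current:
                break
            # Candidate move: delete one random character.
            i = rng.randrange(len(current))
            candidate = current[:i] + current[i + 1:]
            # Accept any move that does not worsen the proxy.
            if proxy_score(candidate) >= proxy_score(current):
                current = candidate
        return current

    # Every deletion improves the proxy, so the "summary" collapses
    # to the empty string: proxy maximized, intent lost.
    print(repr(hill_climb(TEXT)))

Nothing in the loop rebels, schemes, or wants anything; the unwanted outcome is just what relentless optimization of a mis-specified objective looks like.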

Anders Sandberg, Future of Humanity Institute
Philosophy Faculty of Oxford University