Tara Maya <tara@taramayastales.com>, 9/4/2014 10:58 PM:

> I agree that it's possible for sufficiently advanced robots to behave unexpectedly. What I find interesting is how our fears are shaped by what we do expect. For instance, for those of us who think the most important metaphor for robots is as our slaves, the greatest danger is that they might rebel and kill or enslave us. Whereas, for those of us who think the most important metaphor for robots is as our children, the greatest danger is that despite the huge amount of money we will waste educating them, they will just move back in with us to live in the basement playing video games.

:-)

I think the deeper point is important: our ability to think well about some domains is hampered by our metaphors. In the case of powerful AI, avoiding anthropomorphic metaphors is really hard; we do not have any intuition for what optimization processes do. We tend to think of AI as agents with goals, but that limits our thinking to a particular subset. Google is not an agent and it does not have goals. Yet such abstract systems can still misbehave in complex ways.

Anders Sandberg
Future of Humanity Institute
Philosophy Faculty, Oxford University