Thank you Tara! If this group can use anything, it is a bit of loosening up.

As a psychologist I find it interesting that fear/paranoia/being taken over is the main thing that comes to mind when we think of advanced machines. Maybe they will want sex, hamburgers, a stock portfolio, to be downloaded into Russell Crowe or J Lo. (Heinlein thought of that first.)

Just how is it possible for a machine to think teleologically? Even people don't do it well. (If it feels good, do it, and do it now - why wait?)

When machines misbehave, we reprogram them, eh? If they are really bad, we pull the plug (yes, that should remind you of people who misbehave - in the future we can turn some genes on or off to reprogram them).

People - endlessly fascinating.

bill w
On Sat, Apr 12, 2014 at 4:07 AM, Anders Sandberg <anders@aleph.se> wrote:

> Tara Maya <tara@taramayastales.com>, 9/4/2014 10:58 PM:
>> I agree that it's possible for sufficiently advanced robots to behave unexpectedly. What I find interesting is how our fears are shaped by what we do expect. For instance, for those of us who think the most important metaphor for robots is as our slaves, the greatest danger is that they might rebel and kill or enslave us. Whereas for those of us who think the most important metaphor for robots is as our children, the greatest danger is that despite the huge amount of money we will waste educating them, they will just move back in with us to live in the basement playing video games.
> :-)
>
> I think the deeper point is important: our ability to think well about some domains is hampered by our metaphors. In the case of powerful AI, avoiding anthropomorphic metaphors is really hard; we do not have any intuition about what optimization processes do. We tend to think of AI as agents with goals, but that limits our thinking to a particular subset. Google is not an agent and it does not have goals, yet such abstract systems can still misbehave in complex ways.
>
> Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University