[ExI] malevolent machines

William Flynn Wallace foozler83 at gmail.com
Sat Apr 12 14:18:04 UTC 2014


Thank you, Tara!  If this group can use anything, it is a bit of loosening up.

As a psychologist, I find it interesting that fear/paranoia/takeover is the
main thing that comes to mind when we think of advanced machines.  Maybe they
will want sex, hamburgers, a stock portfolio, or to be downloaded into Russell
Crowe or J Lo.  (Heinlein thought of that first.)

Just how is it possible for a machine to think teleologically?  Even people
don't do it well.  (If it feels good, do it, and do it now - why wait?)

When machines misbehave, we reprogram them, eh?  If they are really bad, we
pull the plug (yes, that should remind you of people who misbehave - in the
future we will be able to turn some genes on or off to reprogram them).

People - endlessly fascinating.

bill w


On Sat, Apr 12, 2014 at 4:07 AM, Anders Sandberg <anders at aleph.se> wrote:

> Tara Maya <tara at taramayastales.com>, 9/4/2014 10:58 PM:
>
>
> I agree that it's possible for sufficiently advanced robots to behave
> unexpectedly. What I find interesting is how our fears are shaped by what
> we do expect. For instance, for those of us who think the most important
> metaphor for robots is as our slaves, the greatest danger is that they
> might rebel and kill or enslave us. Whereas, for those of us who think
> the most important metaphor for robots is as our children, the greatest
> danger is that despite the huge amount of money we will waste educating
> them, they will just move back in with us to live in the basement playing
> video games.
>
>
> :-)
>
> I think the deeper point is important: our ability to think well about
> some domains is hampered by our metaphors. In the case of powerful AI
> avoiding anthropomorphic metaphors is really hard; we have no intuition for
> what optimization processes do. We tend to think of AI as agents
> with goals, but that limits our thinking to a particular subset. Google is
> not an agent and it does not have goals. Yet such abstract systems can
> still misbehave in complex ways.
>
>
> Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy,
> University of Oxford
>

