[ExI] Fwd: Re: AI risks

Mike Dougherty msd001 at gmail.com
Tue Sep 8 01:08:12 UTC 2015


On Sep 7, 2015 7:26 PM, "Anders Sandberg" <anders at aleph.se> wrote:

> But note the differences too. Fire behaves in a somewhat predictable way
> that does not change, requires certain resources (air, heat, fuel) we can
> control, its spread is limited by the extent of resource patches, it is
> highly detectable and so on. It is exponential in the small, but linear or
> bounded in the large. You cannot just set fire to the whole world.

Of course. We know those things about fire now; our primitive ancestors
did not.
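
For what it's worth, the "exponential in the small, bounded in the large"
point can be made precise with the standard logistic model (my gloss, not
a formula from Anders's post):

    dN/dt = r N (1 - N/K)

where N is the area burning, r the spread rate, and K the extent of the
resource patch. While N << K, growth is approximately exponential
(dN/dt ~ r N), but N can never exceed K, so the fire stays bounded in
the large.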

> Understanding the underlying properties of technologies is essential for
> figuring out how to handle the risks. One of the problems with the AI
> safety debate is that far too many people are unwilling or unable to start
> decomposing the problem into understandable parts.

Right. That's why I went as far back as fire for my threat analogy.
Obviously, nuclear weapons are more complicated than fire. I imagine many
Californians are more concerned about the destruction of their property by
fire than by nuclear weapons.

In this regard, I feel more threatened by intermittent power outages and
the loss of Internet access than by murderous AI.

I'm also wary of DIY biohacking turning into a plague, but who wants to
think about that when there are so many Terminator images to put next to
your clickbait headlines?