<p dir="ltr"><br>
On Sep 7, 2015 7:26 PM, "Anders Sandberg" <<a href="mailto:anders@aleph.se">anders@aleph.se</a>> wrote:</p>
<p dir="ltr">> But note the differences too. Fire behaves in a somewhat predictable way that does not change, requires certain resources (air, heat, fuel) we can control, its spread is limited by the extent of resource patches, it is highly detectable and so on. It is exponential in the small, but linear or bounded in the large. You cannot just set fire to the whole world.<br>
></p>
<p dir="ltr">Of course. We know those things about fire now; our primitive ancestors did not. </p>
<p dir="ltr">> Understanding the underlying properties of technologies is essential for figuring out how to handle the risks. One of the problems with the AI safety debate is that far too many people are unwilling or unable to start decomposing the problem into understandable parts.</p>
<p dir="ltr">Right. That's why I went as far back as fire for my threat analogy. Obviously nuclear weapons are more complicated than fire. I imagine many Californians are more concerned about destruction of their property by fire than by nuclear weapons. </p>
<p dir="ltr">In this regard I feel more threatened by intermittent power outages and the loss of Internet than murderous AI.</p>
<p dir="ltr">I'm also wary of DIY biohacking turning into a plague, but who wants to think about that when there are so many terminator images to put near your clickbait headlines?</p>