<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
On 2015-09-07 17:08, Mike Dougherty wrote:<br>
<blockquote
cite="mid:CAOJFdbLz=xA8swU4QwapKDqeLf8KGKV+OZNM-CcsEvwpe+masw@mail.gmail.com"
type="cite">
<p dir="ltr">
On Sep 7, 2015 10:24 AM, "Ben" <<a moz-do-not-send="true"
href="mailto:bbenzai@yahoo.com">bbenzai@yahoo.com</a>>
wrote:</p>
<p dir="ltr">> As there is zero probability of all these
different points of view ever agreeing, the whole concept of
distinguishing between 'good use' and 'bad use' of AI is
meaningless. I don't know what the answer is, but I do think
it's a waste of time talking in these terms. We need a new
angle on the whole thing.</p>
<p dir="ltr">Replace AI in this thread with a similarly disruptive
technology: fire.</p>
<p dir="ltr">Fire is inherently dangerous.<br>
Fire is useful when wielded responsibly.<br>
Fire is a dangerous weapon.</p>
<p dir="ltr">All true. Somehow we have survived.<br>
</p>
</blockquote>
<br>
But note the differences too. Fire behaves in a somewhat predictable
way that does not change; it requires certain resources (air, heat,
fuel) that we can control; its spread is limited by the extent of
resource patches; it is highly detectable; and so on. It is
exponential in the small, but linear or bounded in the large. You
cannot just set fire to the whole world.<br>
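<br>
To make the "exponential in the small, bounded in the large" point
concrete, here is a minimal sketch with made-up numbers (not a real
fire model): growth looks exponential while fuel is abundant, but
saturates at the size of the resource patch.<br>
<pre>
# Toy illustration, purely hypothetical numbers: fire growth that is
# roughly exponential early on but bounded by the size of the fuel patch.
from math import exp

def burned_area(t, patch_size=1000.0, growth_rate=0.5, initial=1.0):
    """Logistic curve: ~exponential at first, saturating at patch_size."""
    return patch_size / (1.0 + (patch_size / initial - 1.0) * exp(-growth_rate * t))

for t in range(0, 31, 5):
    print(t, round(burned_area(t), 1))
# Early values grow roughly exponentially; later ones flatten out near patch_size.
</pre>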
<br>
Understanding the underlying properties of technologies is essential
for figuring out how to handle the risks. One of the problems with
the AI safety debate is that far too many people are unwilling or
unable to start decomposing the problem into understandable parts. <br>
<br>
There is useful work to be done on the capacity of learning systems
to infer things from limited data, and the rate at which their
power grows. We can analyse how to control their capacities or
motivations. We can examine different schemes for governance,
policing and monitoring, as well as subtler methods such as incentive
structures. We can analyse the resources needed for different kinds
of AI, and our uncertainty about them. We can investigate thresholds
of entry to the technology, and what determines their height. We can
analyse the game theory of AI-races. We can measure performance over
time. And so on. Big questions, yes, but far less handwavy and
actually possible to make progress on. <br>
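<br>
As one illustration of the kind of decomposition possible, the AI-race
point can be cast as a simple two-player game. The payoff numbers below
are entirely hypothetical, just to show the structure that makes racing
tempting even when mutual caution would be better for everyone.<br>
<pre>
# Toy "AI race" game with hypothetical payoffs: each lab chooses to
# invest in safety (S) or cut corners to race (R). Racing gives a
# private edge, but mutual racing is worse for both than mutual safety.
payoffs = {            # (row player, column player)
    ("S", "S"): (3, 3),
    ("S", "R"): (0, 4),
    ("R", "S"): (4, 0),
    ("R", "R"): (1, 1),
}

def best_response(opponent_move):
    """Row player's best reply to a fixed opponent move."""
    return max(["S", "R"], key=lambda m: payoffs[(m, opponent_move)][0])

for opp in ("S", "R"):
    print("If the other lab plays", opp, "-> best response:", best_response(opp))
# Both lines print R: racing dominates, so (R, R) is the equilibrium
# even though (S, S) would leave both players better off.
</pre>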
<br>
This goes for a lot of technological threats. On the one hand, go to
the meta level and abstract away details to see the overall patterns
that matter (low threshold to entry? exponential? adaptive?); on the
other hand, decompose the questions into chunks that can actually be
investigated.<br>
<br>
<pre class="moz-signature" cols="72">--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University</pre>
</body>
</html>