[ExI] Fwd: Re: AI risks
Anders Sandberg
anders at aleph.se
Mon Sep 7 23:24:47 UTC 2015
On 2015-09-07 17:08, Mike Dougherty wrote:
>
> On Sep 7, 2015 10:24 AM, "Ben" <bbenzai at yahoo.com> wrote:
>
> > As there is zero probability of all these different points of view
> > ever agreeing, the whole concept of distinguishing between 'good use'
> > and 'bad use' of AI is meaningless. I don't know what the answer is,
> > but I do think it's a waste of time talking in these terms. We need a
> > new angle on the whole thing.
>
> Replace AI in this thread with a similarly disruptive technology: fire.
>
> Fire is inherently dangerous.
> Fire is useful when wielded responsibly.
> Fire is a dangerous weapon.
>
> All true. Somehow we have survived.
>
But note the differences too. Fire behaves in a somewhat predictable way
that does not change; it requires certain resources (air, heat, fuel) we
can control; its spread is limited by the extent of resource patches; it
is highly detectable; and so on. It is exponential in the small, but
linear or bounded in the large. You cannot just set fire to the whole world.
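As a toy illustration of that growth pattern (a minimal Python sketch; the
growth rate, patch size and starting area below are arbitrary made-up
numbers, not a fire model):

from math import exp

# Logistic growth: roughly exponential while fuel is plentiful,
# flattening out once the local fuel patch (K) is exhausted.
def burned_area(t, r=0.5, K=1000.0, a0=1.0):
    return K / (1.0 + (K / a0 - 1.0) * exp(-r * t))

for t in (0, 5, 10, 20, 40):
    print(t, round(burned_area(t), 1))
# Early on the burned area roughly doubles per unit time; later it
# saturates near K rather than growing without bound.

The interesting question for any technology is whether its growth saturates
like this or keeps compounding.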
Understanding the underlying properties of technologies is essential for
figuring out how to handle the risks. One of the problems with the AI
safety debate is that far too many people are unwilling or unable to
start decomposing the problem into understandable parts.
There is useful work to be done on the capacity of learning systems to
infer things from limited data, and the rate at which their power grows.
We can analyse how to control such a system's capacities or motivations.
We can examine different schemes for governance, policing and monitoring,
as well as subtler methods of incentive structures. We can analyse the
resources needed for different kinds of AI, and our uncertainty about
them. We can investigate thresholds of entry to the technology, and what
determines their height. We can analyse the game theory of AI races. We
can measure performance over time. And so on. Big questions, yes, but
far less handwavy, and ones on which actual progress can be made.
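To show the kind of decomposition I mean, here is a deliberately crude
sketch of the AI-race point (Python; every number and functional form
below is an invented assumption, chosen only to make the structure
visible, not a claim about real labs):

from itertools import product

SAFETY = [0, 1, 2]                 # 0 = reckless, 2 = cautious
WIN_VALUE = 10.0                   # payoff for deploying first
DISASTER_COST = 25.0               # cost to everyone if things go wrong

def p_win(s_me, s_other):
    # Chance I finish first: skipping safety work speeds me up.
    speed_me, speed_other = 3 - s_me, 3 - s_other
    return speed_me / (speed_me + speed_other)

def p_disaster(s_me, s_other):
    # Chance of catastrophe, driven by the less careful lab.
    return 0.3 * (2 - min(s_me, s_other)) / 2

def payoff(s_me, s_other):
    return WIN_VALUE * p_win(s_me, s_other) - DISASTER_COST * p_disaster(s_me, s_other)

# Pure-strategy Nash equilibria: no lab gains by unilaterally changing safety.
equilibria = []
for a, b in product(SAFETY, repeat=2):
    best_a = all(payoff(a, b) >= payoff(x, b) for x in SAFETY)
    best_b = all(payoff(b, a) >= payoff(x, a) for x in SAFETY)
    if best_a and best_b:
        equilibria.append((a, b))

print("equilibria:", equilibria)

With these particular made-up numbers the search turns up several symmetric
equilibria, some cautious and some reckless; mapping out when the reckless
ones dominate is exactly the sort of tractable question I mean.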
This goes for a lot of technological threats. On the one hand, go to the
meta level and abstract away details to see the overall patterns that
matter (low threshold to entry? exponential? adaptive?); on the other
hand, decompose the questions into chunks that can actually be investigated.
--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University