[ExI] Fwd: Re: AI risks

Anders Sandberg anders at aleph.se
Wed Sep 9 08:17:47 UTC 2015


On 2015-09-09 00:36, Adrian Tymes wrote:
>
> That sounds like a story I wrote.  The Earth was depopulated by 
> plague; said plague was designed to be targeted at some specific 
> subset of humanity, but was accidentally released without that target 
> set, and thus attacked everyone.
>

Racists often overestimate the genetic purity (and well-definedness) of 
ethnicities. Unfortunately, that knowledge is likely only partially 
correlated with the ability to edit genomes in scary ways.

I think the *existential* risk from accidents, even with maliciously 
designed pathogens, is pretty low, since it likely takes a fair bit of 
effort to kill everyone reliably. On the other hand, the global 
catastrophic risk is pretty high: getting a 50% depopulation from a 
bad pathogen seems entirely doable, and that would disrupt global 
infrastructure, killing even more people.

I have started tinkering with a review of people and groups who have 
tried to destroy the world (both using real means and means they 
*thought* were real). It would be interesting to hear if anybody has a 
list - the one I have so far is very short. The GCR/xrisk threat seems 
to come more from side effects of the pursuits of non-omnicidal people, 
as with the MAD doctrine.

-- 
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University
