[ExI] Fwd: Re: AI risks

Anders Sandberg anders at aleph.se
Wed Sep 9 23:37:58 UTC 2015


On 2015-09-09 14:16, Flexman, Connor wrote:
> On Tue, Sep 8, 2015 at 3:12 AM, Anders Sandberg <anders at aleph.se> wrote:
>
>
>     "Ah, Anders, I see that you are actually working on the biohacking
>     project."
>     "Yes, I have this risk pipeline model showing that most of the
>     risk is from the disgruntled postdoc/biotech startup part of the
>     spectrum than the biohackers."
>     "Good work."
>     "And I figured out that I could get an order of magnitude more
>     fatalities by inserting a gene blocking..."
>     "Anders!"
>
>
> Does all this latent anxiety about boss oversight have anything to do
> with the recent work on neck-tie knots?

Hehe... it is a component. I suffer from academic ADHD: I have a hard 
time keeping to one project. And when I go off-topic, I go seriously 
off-topic.

But at FHI we also take information hazards seriously. If we are trying 
to discover things truly important for the future, and actions that can 
relevantly transform it, then we should expect to find some things that 
are massively risky to spread around. So we had better learn to keep 
/stumm/ about those. But that requires recognizing that they belong in 
this category...

-- 
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University
