[ExI] AI extinction risk

Bill Hibbard test at ssec.wisc.edu
Sat Mar 15 13:32:47 UTC 2014


Anders Sandberg <anders at aleph.se> wrote:

> The AI disaster scenario splits into two cases: the
> good old "superintelligence out of control" we have
> spent much effort at handling, and the "AI empowered
> people out of control" scenario, which is a tricky
> 'thick' socioeconomic problem.

Excellent point.

My recent papers about technical AI risk conclude with:

   This paper addresses unintended AI behaviors. However,
   I believe that the greater danger comes from the fact
   that above-human-level AI is likely to be a tool in
   military and economic competition among humans and thus
   have motives that are competitive toward some humans.

At the AGI-11 Future of AGI Workshop, I presented this talk:

http://agi-conference.org/2011/bill-hibbard-abstract/

This paper was rejected for the AGI-09 Workshop on the
Future of AI:

http://www.ssec.wisc.edu/~billh/g/hibbard_agi09_workshop.pdf

This letter was published in the NY Times in 2008:

http://www.nytimes.com/2008/09/02/science/02lett-SUPERINTELLI_LETTERS.html

And I discussed this issue in a 2008 JET paper:

http://jetpress.org/v17/hibbard.htm

My first publications about the dangers of AI, in 2001
and 2002, assumed that the primary AI risk was social
and political rather than technical.

I'd like to see this risk get a much higher profile
in our community. The recent Oxford study on the
employment impact of AI is an excellent step.
