[ExI] AI extinction risk
Tim Tyler
tim at tt1.org
Sun Mar 16 12:42:58 UTC 2014
On 15/03/2014 09:32, Bill Hibbard wrote:
> My recent papers about technical AI risk conclude with:
>
> This paper addresses unintended AI behaviors. However,
> I believe that the greater danger comes from the fact
> that above-human-level AI is likely to be a tool in
> military and economic competition among humans and thus
> have motives that are competitive toward some humans.
Military and economic competition between groups seems far more likely
to extinguish specific individuals, to me as well. It would therefore make
considerable sense for individuals to focus on these kinds of problems.
The rationale given for focusing on other risk scenarios seems to be
that military and economic competition between groups is *relatively*
unlikely to destroy everything, whereas things like "grey goo" or
civilization-scale wireheading could potentially destroy everyone's
entire future.
Any evolved predispositions humans have are likely to direct them
to focus on the first type of risk. I figure that these more personal
risks will receive plenty of attention in due course.
--
__________
|im |yler http://timtyler.org/ tim at tt1lock.org Remove lock to reply.