[extropy-chat] Two draft papers: AI and existential risk; heuristics and biases

Eliezer S. Yudkowsky sentience at pobox.com
Sun Jun 4 16:33:26 UTC 2006


These are drafts of my chapters for Nick Bostrom's forthcoming edited 
volume _Global Catastrophic Risks_.  I may not have much time for 
further editing, but if anyone discovers any gross mistakes, then 
there's still time for me to submit changes.

The chapters are:

_Cognitive biases potentially affecting judgment of global risks_
   http://singinst.org/Biases.pdf
An introduction to the field of heuristics and biases - the experimental 
psychology of reproducible errors of human judgment - with a special 
focus on global catastrophic risks.  However, this paper should be 
generally useful to anyone who hasn't previously looked into the 
experimental results on human error.  If you're going to read both 
chapters, I recommend that you read this one first.

_Artificial Intelligence and Global Risk_
   http://singinst.org/AIRisk.pdf
The new standard introductory material on Friendly AI.  Any links to 
_Creating Friendly AI_ should be redirected here.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
