[extropy-chat] Two draft papers: AI and existential risk; heuristics and biases

Eliezer S. Yudkowsky sentience at pobox.com
Fri Jun 23 23:53:16 UTC 2006


Peter McCluskey wrote:
> 
> I agree with your criticisms of Eliezer's scenario of isolated
> self-improvement (and I suspect he has a strong bias toward scenarios
> under which his skills are most valuable), but if we alter the scenario
> to include the likelihood that the AI will need many CPUs interacting
> with the real world, then I think most of what he says about the risks
> remains plausible and your criticisms seem fairly weak.

Peter,

Why would my skills be any less valuable under a scenario where the AI 
needs many CPUs interacting with the real world, leaving the rest of the 
scenario unchanged?

I would not be overly surprised if the *first* AI, even a Friendly AI, 
is most conveniently built with many CPUs.  Realtime, unrestricted 
Internet access is perhaps unwise; but if it is convenient to have a 
small robotics lab, there is nothing wrong with that.  However, various 
incautious persons believe that an AI absolutely cannot exist without 
many CPUs and a realtime interface, and use this uncalculated intuition 
to justify not taking other safety precautions; this may lead me to be 
emphatic about the *possibility* that an AI can do just fine with one 
CPU and a VT100 terminal.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
