[extropy-chat] Two draft papers: AI and existential risk; heuristics and biases
Peter McCluskey
extropy at bayesianinvestor.com
Mon Jun 26 18:58:18 UTC 2006
sentience at pobox.com (Eliezer S. Yudkowsky) writes:
>Peter McCluskey wrote:
>>
>> I agree with your criticisms of Eliezer's scenario of isolated
>> self-improvement (and I suspect he has a strong bias toward scenarios
>> under which his skills are most valuable), but if we alter the scenario
>> to include the likelihood that the AI will need many CPUs interacting
>> with the real world, then I think most of what he says about the risks
>> remains plausible and your criticisms seem fairly weak.
>
>Peter,
>
>Why would my skills be any less valuable under a scenario where the AI
>needs many CPUs interacting with the real world, leaving the rest of the
>scenario unchanged?
One obvious reason is that your skill at acquiring the money needed for
a large server farm appears to be limited. If CPU power is the most
important requirement for AI, then your chances of creating an AI before
Google does seem small. (I'm assuming you still hope to create the first
AI yourself. It is your belief that you have the skills to do that which
I think biases you. You might have more valuable skills that enable you
to talk others out of creating an unfriendly AI, but those skills don't
appear to be biasing you much.)
To put it in more general terms, I can imagine a big range of possibilities
for what is required to create the first AI, from a very simple algorithm
that runs on a single isolated CPU, to a system with much of the complexity
of human intelligence. Only the simpler systems can be created by the small
group of programmers that you are likely to assemble. The more complex
systems require more programmers (to implement a complex set of algorithms)
and more hardware than I think you have the skills to assemble.
>I would not be overly surprised if the *first* AI, even a Friendly AI,
>is most conveniently built with many CPUs. Realtime, unrestricted
>Internet access is perhaps unwise; but if it is convenient to have a
>small robotics lab, there is nothing wrong with that. However, various
>incautious persons who believe that an AI absolutely cannot possibly
>exist without many CPUs and a realtime interface, and who use this
>uncalculated intuition to justify not taking other safety precautions,
>may lead me to be emphatic about the *possibility* that an AI can do
>just fine with one CPU and a VT100 terminal.
It is appropriate for you to criticise that kind of certainty, but
I suspect that pointing out an overconfidence bias would be more effective
than sounding as if you yourself might be overconfident about the
possibility of an isolated AI takeoff.
--
------------------------------------------------------------------------------
Peter McCluskey | Science is the belief in the ignorance of experts.
www.bayesianinvestor.com| - Richard Feynman