[extropy-chat] Two draft papers: AI and existential risk; heuristics and biases
Peter McCluskey
extropy at bayesianinvestor.com
Fri Jun 23 22:20:29 UTC 2006
Robin Hanson writes:
>The obvious question about a single AI is why its improvements could
>not with the usual ease be transferred to other AIs or humans, or
>made available via trades with those others. If so, this single AI
>would just be part of our larger system of self-improvement. The
>scenario of rapid isolated self-improvement would seem to be where
>the AI found a new system of self-improvement, where knowledge
>production was far more effective, *and* where internal sharing of
>knowledge was vastly easier than external sharing.
>
>While this is logically possible, I do not yet see a reason to think
>it likely.
[I'm rejoining the list after a few years' absence and wondering whether
I can handle the volume via a spambayes filter.]
I agree with your criticisms of Eliezer's scenario of isolated
self-improvement (and I suspect he has a strong bias toward scenarios
under which his skills are most valuable), but if we alter the scenario
to include the likelihood that the AI will need many CPUs interacting
with the real world, then I think most of what he says about the risks
remains plausible and your criticisms seem fairly weak.
An AI that can improve its cognitive power faster than other intelligences,
even if its takeoff is as slow as Microsoft's takeoff, would still create
the risk that it eventually becomes powerful enough relative to others
to conquer them.
We see some signs that the transfer of cognitive abilities from more
intelligent entities to less intelligent ones is limited by the abilities
of the less intelligent. Even if only a small fraction of cognitive
abilities can't be transferred to the less intelligent, that would appear
to create a trend of diverging abilities.
Some of the factors that limit those trends in biological organisms
(difficulties in coordinating larger assemblies of computing power and
I/O power) appear to be less effective at limiting digital intelligences,
so I'm less optimistic that a trend toward diverging abilities would be
stopped as easily as it is among biological organisms.
How likely does this scenario need to be to scare you? It seems hard to
imagine an argument strong enough, for or against it, to justify assigning
it a probability very far from 50%, and any probability not very far from
50% is high enough to justify much of Eliezer's concern.
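To make that point concrete, here is a toy expected-loss calculation in
Python. The numbers are purely illustrative assumptions of mine, not
estimates of the actual probability or stakes:

    # Toy sketch: even a modest probability of the takeover scenario can
    # dominate the expected loss if the loss conditional on it is large.
    # All numbers below are made up for illustration only.
    p_takeover = 0.2          # assumed probability of the scenario
    loss_if_takeover = 100.0  # assumed relative loss if it happens
    loss_otherwise = 1.0      # assumed relative loss if it doesn't
    expected_loss = p_takeover * loss_if_takeover + (1 - p_takeover) * loss_otherwise
    print(expected_loss)      # 20.8 -- dominated by the "unlikely" branch

Under these made-up numbers, the scenario accounts for nearly all of the
expected loss even though it gets well under a 50% probability.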
Eliezer, I think your low-lying fruit metaphor is interesting, but when
arguing for it you seem to put most of your effort into arguing that there's
lots of fruit up there somewhere, and not much effort into analyzing
whether the low-lying parts of it remain unpicked.
--
------------------------------------------------------------------------------
Peter McCluskey | Science is the belief in the ignorance of experts.
www.bayesianinvestor.com| - Richard Feynman