[extropy-chat] Fools building AIs
Eliezer S. Yudkowsky
sentience at pobox.com
Fri Oct 6 00:28:13 UTC 2006
Rafal Smigrodzki wrote:
> On 10/4/06, Eliezer S. Yudkowsky <sentience at pobox.com> wrote:
> You also seem to assume (but not say so explicitly) that the sets
> "all people capable of independently shaping a GAI so as to act
> according to their wishes" and "all evil people (i.e. people whose
> goal is a net loss of utility)" are non-intersecting. This is where I
> disagree with you.
I think it is likely that the intersection is small - bear in mind, the
first set is damned small to begin with - but I do not claim that it is
zero when summed over all Everett branches. It's probably zero in any
given Everett branch.
But the point is that the set of people who are capable of building and
shaping an AGI, and who would do it on command from their military
superiors to blow up a terrorist bunker somewhere, is "essentially zero".
> I can visualize the mind with an excellent intuitive grasp of
> rationality, a mind that understands itself, knows where its goals
> came from, knows where it is going, and yet wishes nothing but to
> fulfill the goals of dominance, and destruction.
A *human* mind? I think most people in this set would not be running on
strictly normal brainware; but maybe you could, for example, have a
genuine genius psychopath. I do not deny the possibility, but it seems
to have a low frequency.
> I am not talking
> about the Sharia zombie, who may have benign goals (personal happiness
> with virgins, etc.) perverted by ignorance and lack of understanding.
> I am referring to people who are evil, know it, and like it so.
That's pretty damn rare at IQ 140+. Evil people who know they're evil
and like being evil are far rarer than evil people in general. Most
famous super-evil people are not in that small set.
> Of course, I may be wrong. Perhaps there is a cognitive sieve that
> separates GAI builders and Dr. Evil. I also think that present
> understanding of the issue is generally insufficient to allow
> confident prediction. Therefore, until proven otherwise, the prospect
> of truly evil geniuses with large AI budgets will continue to worry
> me, more than the dangers of asteroid impacts but less than a flu
> pandemic.
Well, yes, but:
Problem of truly evil geniuses who can build and shape AGI
<< problem of misguidedly altruistic geniuses who pick the wrong F
<< problem of genius-fools who turn their future light cones into paperclips
where << is the standard "much less than" symbol.
In my experience thus far, the notion of someone deliberately building
an evil AGI is much appealed to by genius-fools searching for a
plausible-sounding excuse not to slow down: "We've got to beat those
bastards in the military! We don't have time to perfect our AI theory!"
Now this is a nonzero risk, but the risk of genius-fools is far
greater, in the sense that I expect most AI-blasted Everett branches to
be wiped out by genius-fools, not truly evil supergeniuses. Because the
base prior so heavily favors the former catastrophe scenario, the
partial derivative of dead Everett branches with respect to policy is
dominated by the genius-fool term: any change that makes it even a tiny
bit easier to be an altruistic genius-fool increases the toll of dead
branches, no matter how much harder it makes it to be a truly evil
supergenius. In fact, I expect (with lower confidence) that many more
Everett branches are wiped out by genius-fool AI programmers than by
nanotech or superviruses.
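
To spell out that dominance argument in symbols (a sketch in
hypothetical notation of my own, not anything from the original
exchange):

% Sketch only.  D = expected measure of AI-blasted Everett branches under
% a given policy; e_fool and e_evil measure how easy that policy makes it
% for an altruistic genius-fool, or a truly evil supergenius, to launch
% an AGI.
\[
  D(e_{\mathrm{fool}}, e_{\mathrm{evil}})
    = D_{\mathrm{fool}}(e_{\mathrm{fool}})
    + D_{\mathrm{evil}}(e_{\mathrm{evil}}),
  \qquad
  D_{\mathrm{fool}} \gg D_{\mathrm{evil}} .
\]
% Both terms grow as launching gets easier, but the fool term dominates:
\[
  \frac{\partial D}{\partial e_{\mathrm{fool}}}
    \gg
  \frac{\partial D}{\partial e_{\mathrm{evil}}} > 0 .
\]
% So for a policy trade with de_fool > 0 and de_evil < 0,
%   dD = (dD/de_fool) de_fool + (dD/de_evil) de_evil > 0,
% because even driving D_evil all the way to zero recovers less measure
% than a small increase in D_fool costs.

On that sketch, the evil-supergenius term is simply too small for any
trade against it to pay for added risk in the fool term.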
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence