[extropy-chat] Fools building AIs (was: Tyranny in place)
rafal.smigrodzki at gmail.com
Thu Oct 5 22:58:23 UTC 2006
On 10/4/06, Eliezer S. Yudkowsky <sentience at pobox.com> wrote:
### I do think we are talking past each other to some extent. Above
you thoroughly discuss the notion that given widespread lack of
understanding of the art of rationality, most attempts at building a
GAI will either fizzle or produce a UFAI, regardless of the
motives of the would-be builders. I do not take issue with this claim.
You also seem to assume (though you do not say so explicitly) that the sets
"all people capable of independently shaping a GAI so as to act
according to their wishes" and "all evil people (i.e. people whose
goal is a net loss of utility)" are non-intersecting. This is where I
disagree with you.
I can visualize a mind with an excellent intuitive grasp of
rationality, a mind that understands itself, knows where its goals
came from, knows where it is going, and yet wishes nothing but to
fulfill goals of dominance and destruction. I am not talking
about the Sharia zombie, who may have benign goals (personal happiness
with virgins, etc.) perverted by ignorance and lack of understanding.
I am referring to people who are evil, know it, and like it so.
Of course, I may be wrong. Perhaps there is a cognitive sieve that
separates GAI builders from Dr. Evil. I also think that present
understanding of the issue is generally insufficient to allow
confident prediction. Therefore, until proven otherwise, the prospect
of truly evil geniuses with large AI budgets will continue to worry
me, more than the dangers of asteroid impacts but less than a flu pandemic.