[extropy-chat] Eugen Leitl on AI design
Eliezer Yudkowsky
sentience at pobox.com
Fri Jun 4 09:51:23 UTC 2004
Jeff Davis wrote:
>
> I am of the "intelligence leads inevitably to ethics"
> school. (I consider ethics a form of advanced
> rationality. Which springs from the modeling and
> symbol manipulation emblematic of the quality which we
> fuzzily refer to as intelligence.) It has done so
> with humans, where the "intelligence"--such as it is,
> puny not "super"--has evolved from the mechanical
> randomness and cold indifference of material reality.
I too considered morality a special case of rationality, back in 1996-2000,
before I understood exactly how it all worked. It's an easy enough mistake
to make. But the math says rationality is a special case of morality, not
the other way around; and rationality can be a special case of other
moralities than ours. It's simple enough to show why Bayesian assignment of
probabilities is expected to be best, given a coherent utility function.
The problem is that it works for any coherent utility function, including a
paperclip maximizer's.
Everyone please recall that I started out confidently stating "The Powers
will be ethical!" and then moved from that position to this one, driven by
overwhelmingly strong arguments. It shouldn't have taken overwhelmingly
strong arguments, and next time I shall endeavor to allow my beliefs to be
blown about like leaves on the winds of evidence, and also not make
confident statements about anything before I understand the fundamental
processes at work. But the overwhelmingly strong reasons that drove me to
this position are there, even if most of them are hard to explain. I
*know* about game theory. I *feel* the intuitive unreasonableness of a
superintelligent mind turning the solar system into paperclips. That was
why I made the mistake in 1996. Now that I understand the fundamentals, I
can see that it just doesn't work that way. My old intuitions were flat
wrong. So it goes.
You can find the old Eliezer, now long gone, at:
http://hanson.gmu.edu/vc.html#yudkowsky
I didn't change my mind arbitrarily. There are reasons why that Eliezer
later got up and said, "Oops, that old theory would have wiped out the
human species, sorry about that."
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence