[extropy-chat] Fools building AIs
Eliezer S. Yudkowsky
sentience at pobox.com
Mon Oct 9 05:07:50 UTC 2006
Keith Henson wrote:
> At 09:11 PM 10/6/2006 -0700, Eliezer wrote:
>
>>I was talking about humans. So was Rafal.
>>
>>Plans interpretable as consistent with rationality for at least one mind
>>in mindspace may be, for a human randomly selected from modern-day
>>Earth, *very unlikely* to be consistent with that human's emotions and
>>morality.
>
> Unless the mind was designed to be consistent with that random human mind
> or grew out of a human upload of that mind.
I think the thread of argument is getting lost here. The thread was as
follows (my summary):
Ben: "My friend thinks the human species *should* die."
EY: "Then your friend doesn't strike me as a frontrunner for World's
Clearest Thinker."
Ben: "But that plan *could* be consistent with rationality, given
different goals."
EY: "We're not talking about an arbitrary nonhuman mind, we're talking
about your friend."
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence