[extropy-chat] Fools building AIs

Keith Henson hkhenson at rogers.com
Mon Oct 9 03:39:09 UTC 2006


At 09:11 PM 10/6/2006 -0700, Eliezer wrote:

snip

>I was talking about humans.  So was Rafal.
>
>Plans interpretable as consistent with rationality for at least one mind
>in mindspace may be, for a human randomly selected from modern-day
>Earth, *very unlikely* to be consistent with that human's emotions and
>morality.

Unless the mind was designed to be consistent with that randomly selected
human mind, or grew out of an upload of that human's mind.

>Especially if we interpret "consistency" as meaning "satisficing" or "at
>least not being antiproductive" with respect to a normalization of the
>human's emotions and morality, i.e., the morality they would have if
>their otherwise identical emotions were properly aggregative over
>extensional events rather than suffering from scope neglect and fast
>evaluation by single salient features, etc.

Hmm.
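
To make the contrast concrete, here is a toy sketch in Python.  The
function names and numbers are my own illustrative assumptions, loosely
modeled on the classic save-the-birds scope-neglect studies, not
anything from the thread: a scope-neglecting valuation driven by a
single salient feature barely moves across two orders of magnitude,
while an extensionally aggregative one scales with the count.

import math

def salient_feature_value(n_birds_saved):
    """Scope-neglecting valuation: driven by the single salient image
    (one oiled bird), nearly flat in the actual count.  Stated
    willingness-to-pay grows roughly with the log of scope, if at
    all.  Numbers here are illustrative assumptions."""
    return 80 + 2 * math.log10(n_birds_saved)

def aggregative_value(n_birds_saved, value_per_bird=0.04):
    """Extensionally aggregative valuation: each saved bird counts,
    so value scales linearly with the number of events."""
    return value_per_bird * n_birds_saved

for n in (2_000, 20_000, 200_000):
    print(f"{n:>7} birds: salient {salient_feature_value(n):7.2f}, "
          f"aggregative {aggregative_value(n):9.2f}")

Run it and the salient-feature values crawl from about 87 to 91 while
the aggregative values go from 80 to 8000.  That gap is the
"normalization" Eliezer is pointing at.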

Keith Henson



