[extropy-chat] Fools building AIs
Eliezer S. Yudkowsky
sentience at pobox.com
Sat Oct 7 04:11:57 UTC 2006
Ben Goertzel wrote:
>>The problem with rationality and understanding is that they can be
>>coupled to something like 2^10^17 goal systems/attitudes, or more,
>>sometimes making them meaningless in the context of examining goals.
>>The problem is that the phrases "understanding" and "rationality" are
>>frequently value-loaded, when to make things simpler we should use
>>them just to describe the ability to better predict the next blip of
> Thanks. That is exactly the point I was trying to make.
I was talking about humans. So was Rafal.
Plans interpretable as consistent with rationality for at least one mind
in mindspace may be, for a human randomly selected from modern-day
Earth, *very unlikely* to be consistent with that human's emotions and
morality.
Especially if we interpret "consistency" as meaning "satisficing", or at
least "not being counterproductive", with respect to a normalization of
the human's emotions and morality, i.e., the morality they would have if
their otherwise identical emotions were properly aggregative over
extensional events, rather than suffering from scope neglect and fast
evaluation by single salient features, etc.
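The contrast between an aggregative valuation and one distorted by scope neglect can be sketched with a toy model. This is purely illustrative and not from the post: the function names, the logarithmic flattening, and all numbers are assumptions chosen to mimic the classic finding that stated willingness to help scales far slower than the number of individuals affected.

```python
import math

def aggregative_value(n_events, value_per_event=1.0):
    """Properly aggregative: value scales linearly with the number
    of distinct extensional events affected."""
    return n_events * value_per_event

def scope_neglect_value(n_events, salient_value=80.0):
    """Scope-neglecting: value is driven by a single salient prototype
    and is nearly flat in the actual scope (hypothetical shape)."""
    return salient_value * (1 + 0.05 * math.log10(max(n_events, 1)))

for n in (2_000, 20_000, 200_000):
    print(n, aggregative_value(n), round(scope_neglect_value(n), 1))
```

Multiplying the scope by 100 multiplies the aggregative valuation by 100, while the scope-neglecting valuation barely moves, which is the divergence the paragraph above points at.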
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence