[extropy-chat] AI design

Adrian Tymes wingcat at pacbell.net
Thu Jun 3 16:14:04 UTC 2004


--- Eliezer Yudkowsky <sentience at pobox.com> wrote:
> That, Damien, is why I now refer to them as Really Powerful
> Optimization Processes, rather than the anthropomorphism
> "superintelligence".  That which humans regard as "common
> sense in the domain of moral argument" costs extra, just as
> natural selection, another alien optimization process, has
> no common sense in the domain of moral argument.

Except, maybe, where these morals are based on
reality, and both moral selection and SI decisions
are also based on reality?  The trick is to understand
the benefits one gets from acting morally, and to
honestly judge whether they apply in any given
situation.  Said benefits mostly break down into
social benefits, which an SI might understandably ignore
(what need has it for the approval of humans?),
and long-term productivity/efficiency benefits, which
a non-suicidal SI would most assuredly not ignore.

This is the part of the equation you seem to be
missing, which others sense intuitively but have
difficulty expressing consciously (given the
not-always-conscious nature of morality).  That, and
the capacity for SIs to overcome their optimization
functions and decide on new ones - for example, the
paperclip maximizer that would realize that paperclips
only have meaning if there's something for them to
clip, and other sentient beings whose convenience the
clips can serve.  (Unless you propose that an SI would
not strive to understand why it does what it does,
which would seem to strongly interfere with any
capability for self-improvement.)


