[ExI] Empathic AGI [WAS Safety of human-like motivation systems]

Stefano Vaj stefano.vaj at gmail.com
Tue Feb 8 11:19:49 UTC 2011


On 7 February 2011 18:47, Samantha Atkins <sjatkins at mac.com> wrote:
> Human empathy is not that deep nor is empathy per se some free floating good.   Why would we want an AGI that was pretty much just like a human except presumably much more powerful?

I can think of only two reasons:
- for the same reason we might want to develop an emulation of a cat
or a bug: for its own sake, as an achievement that is interesting per
se;
- for the same reason we paint realistic portraits of living human
beings, to perpetuate some or most of their traits for the foreseeable
future (see under "upload").

For everything else, computers may become indefinitely more
intelligent and ingenious at resolving diverse categories of problems
without exhibiting bio-like features such as altruism, selfishness,
aggression, sexual drive, will to power, or empathy any more than
they do today.

> Altruistic and selfish are quite overloaded and nearly useless concepts as generally used.

I suspect that you are right.

-- 
Stefano Vaj



