[ExI] Empathic AGI [WAS Safety of human-like motivation systems]
Samantha Atkins
sjatkins at mac.com
Wed Feb 9 03:06:35 UTC 2011
On 02/08/2011 03:19 AM, Stefano Vaj wrote:
> On 7 February 2011 18:47, Samantha Atkins<sjatkins at mac.com> wrote:
>> Human empathy is not that deep nor is empathy per se some free floating good. Why would we want an AGI that was pretty much just like a human except presumably much more powerful?
> I can think only of two reasons:
> - for the same reason we may want to develop an emulation of a cat or
> of a bug, that is, for the sake of it, as an achievement which is
> interesting per se;
> - for the same reason we paint realistic portraits of living human
> beings, to perpetuate some or most of their traits for the foreseeable
> future (see under "upload").
>
> For everything else, computers may become indefinitely more
> intelligent and ingenious at resolving diverse categories of problems
> without exhibiting any bio-like features such as altruism,
If by altruism you mean sacrificing your values, simply because they are
yours, to the values of others, simply because they are not yours, then it
is a very bizarre thing to glorify, to practice, or to hope that our AGIs
practice. On the face of it, it is hopelessly irrational and
counterproductive toward achieving what we actually value. If an AGI
practices that merely on the grounds that someone said it "should", then it
is in need of serious debugging.
- samantha