[ExI] Empathic AGI [WAS Safety of human-like motivation systems]

Samantha Atkins sjatkins at mac.com
Mon Feb 7 17:47:30 UTC 2011


On Feb 7, 2011, at 9:16 AM, Stefano Vaj wrote:

> On 5 February 2011 17:39, Richard Loosemore <rpwl at lightlink.com> wrote:
>> This is exactly the line along which I am going.   I have talked in the
>> past about building AGI systems that are "empathic" to the human
>> species, and which are locked into that state of empathy by their
>> design.  Your sentence above:
>> 
>>> It seems to me that one safety precaution we would want to have is
>>> for the first generation of AGI to see itself in some way as actually
>>> being human, or self identifying as being very close to humans.
>> 
>> ... captures exactly the approach I am taking.  This is what I mean by
>> building AGI systems that feel empathy for humans.  They would BE humans in
>> most respects.
> 
> If we accept that "normal" human-level empathy (that is, a mere
> ingredient in the evolutionary strategies) is enough, we just have to
> emulate a Darwinian machine as similar as possible in its behavioural
> make-up to ourselves, and this shall automatically be part of its
> repertoire - along with aggression, flight, sex, etc.

Human empathy is not that deep, nor is empathy per se some free-floating good.  Why would we want an AGI that was pretty much just like a human, except presumably much more powerful?

> 
> If, OTOH, your AGI is implemented in view of goals other than maximising
> its fitness, it will be neither "altruistic" nor "selfish"; it will
> simply execute the program(s) it has been given or instructed to
> develop, as would any other more or less intelligent, more or less
> dangerous, universal computing device.

"Altruistic" and "selfish" are quite overloaded and nearly useless concepts as generally used.

- s


