[ExI] Empathic AGI [WAS Safety of human-like motivation systems]

Richard Loosemore rpwl at lightlink.com
Mon Feb 7 17:53:55 UTC 2011


Stefano Vaj wrote:
> On 5 February 2011 17:39, Richard Loosemore <rpwl at lightlink.com> wrote:
>> This is exactly the line along which I am going.   I have talked in the
>> past about building AGI systems that are "empathic" to the human
>> species, and which are locked into that state of empathy by their
>> design.  Your sentence above:
>>
>>> It seems to me that one safety precaution we would want to have is
>>> for the first generation of AGI to see itself in some way as actually
>>> being human, or self identifying as being very close to humans.
>> ... captures exactly the approach I am taking.  This is what I mean by
>> building AGI systems that feel empathy for humans.  They would BE humans in
>> most respects.
> 
> If we accept that "normal" human-level empathy (that is, a mere
> ingredient in the evolutionary strategies) is enough, we just have to
> emulate a Darwinian machine as similar as possible in its behavioural
> makeup to ourselves, and this shall automatically be part of its
> repertoire - along with aggression, flight, sex, etc.
> 
> If, OTOH, your AGI is implemented in view of goals other than maximising
> its fitness, it will be neither "altruistic" nor "selfish"; it will
> simply execute whatever other program(s) it is given or instructed to
> develop, like any other more or less intelligent, more or less dangerous
> universal computing device.
> 

Non sequitur.

As I explain in my parallel response to your other post, the dichotomy
you describe is utterly without foundation.



Richard Loosemore




