[ExI] Empathic AGI

Samantha Atkins sjatkins at mac.com
Fri Feb 11 18:38:16 UTC 2011


On 02/09/2011 09:41 AM, John Clark wrote:
> On Feb 7, 2011, at 12:16 PM, Stefano Vaj wrote:
>>
>> If we accept that "normal" human-level empathy (that is, a mere
>> ingredient in the evolutionary strategies) is enough, we just have to
>> emulate a Darwinian machine as similar as possible
>
> Two difficulties with that:
>
> 1) The Darwinian process is more like history than mathematics: it is 
> not repeatable, and very small changes in initial conditions could 
> lead to huge differences in output.
>
> 2) Human-level empathy is aimed at human-level beings; the further 
> from that level, the less empathy we have. We have less empathy for a 
> cow than for a person, and less for an insect than for a cow. As the 
> AI's intelligence gets larger, its empathy for us will get smaller, 
> although its empathy for its own kind might be enormous.
>

Yes, we understand how interdependent peer-level beings will naturally 
develop a set of ethical guides for how they treat one another, along 
with the ability to model one another.  We have little or no idea how 
this would arise among beings of radically different natures and 
abilities that are not so interdependent in their treatment of one 
another.

- samantha
