[ExI] Empathic AGI [WAS Safety of human-like motivation systems]

Richard Loosemore rpwl at lightlink.com
Sun Feb 6 14:50:02 UTC 2011


Kelly Anderson wrote:
> On Sat, Feb 5, 2011 at 9:39 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
>> Kelly,
>>
>> This is exactly the line along which I am going.   I have talked in the
>> past about building AGI systems that are "empathic" to the human
>> species, and which are locked into that state of empathy by their
>> design.
> 
> I would not propose to "design" empathy, but rather to "train" towards
> empathy. I envision raising AGIs just as one would raise a child. This
> would train them to think as though they were a human, or at least
> that they were adopted by humans. As they mature, their learning
> could be sped up, or the net could be copied, and further
> learning could go in many directions, but that core of humanity is the
> most important thing to get right to ensure that the AGIs and future
> humans will live in some kind of harmony.

This is certainly something that you would want to do, but it is kind of 
orthogonal to the question of "designing" the empathy in the first 
place.  A system designed to be a psychopath, for example, would not 
benefit from that kind of upbringing.

So you have to do both.

>>  Your sentence above:
>>
>>> It seems to me that one safety precaution we would want to have is
>>> for the first generation of AGI to see itself in some way as actually
>>> being human, or self identifying as being very close to humans.
>> ... captures exactly the approach I am taking.  This is what I mean by
>> building AGI systems that feel empathy for humans.  They would BE humans in
>> most respects.
> 
> I think AGIs should see us as their ancestors. I would hope to be
> thought of with the kind of respect we would feel for Homo erectus
> (were they still around). Kurzweil states that increased intelligence
> leads to increased empathy, which is an interesting hypothesis. I
> wouldn't know how to test it, but it does seem to be a trend.

This idea that "increased intelligence leads to increased empathy" is a 
natural consequence of a design in which the system makes sure that all 
its ideas are consistent with one another, and with its basic 
motivations.  If its basic motivations start with empathy, then 
increased intelligence would indeed make the system more and more 
empathic.
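
To make that concrete, here is a toy sketch (Python; every name in it 
is hypothetical, and "consistency" is reduced to a trivial tag check) 
of the constraint I mean:

BASE_MOTIVATIONS = {"empathy"}

def consistent(belief, motivations):
    """A belief is rejected if it contradicts a base motivation.
    A real system would need genuine semantic consistency checking;
    this toy just looks for an 'anti-' tag."""
    return not any("anti-" + m in belief for m in motivations)

def think(seed_beliefs, steps):
    """Adopt only motivation-consistent beliefs, then derive more.
    More steps (a crude stand-in for more intelligence) means more
    beliefs, every one of them consistent with the empathic base."""
    adopted = [b for b in seed_beliefs if consistent(b, BASE_MOTIVATIONS)]
    for i in range(steps):
        candidate = "derived-%d" % i
        if consistent(candidate, BASE_MOTIVATIONS):
            adopted.append(candidate)
    return adopted

print(think(["care for humans", "anti-empathy impulse"], 3))
# ['care for humans', 'derived-0', 'derived-1', 'derived-2']

The point of the toy:  nothing the system derives can escape the 
consistency filter, so adding inference power only multiplies 
empathy-consistent beliefs.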

>> I envision a project to systematically explore the behavior of the
>> motivation mechanisms.  In the research phases, we would be directly
>> monitoring the balance of power between the various motivation modules, and
>> also monitoring for certain patterns of thought.
> 
> Here you devolve into the vagueness that makes this discussion
> difficult for me. Are you talking of studying humans here?

Sorry, no, I mean studying the AGI mechanisms.  We do not have enough 
access to the inner, real-time workings of human systems.  This is 
strictly about studying the experimental AGIs, during the research and 
development phase.
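
As a sketch of the kind of instrumentation I have in mind (all module 
names and thresholds here are hypothetical placeholders):

from dataclasses import dataclass

@dataclass
class MotivationModule:
    name: str
    activation: float  # 0.0 .. 1.0, set by the AGI's own dynamics

def check_balance(modules, dominance_threshold=0.8):
    """Flag any module whose share of total activation exceeds the
    threshold, i.e. one motivation dominating the rest."""
    total = sum(m.activation for m in modules) or 1.0
    return [m.name for m in modules
            if m.activation / total > dominance_threshold]

# Example: empathy and curiosity in rough balance, nothing flagged.
modules = [MotivationModule("empathy", 0.6),
           MotivationModule("curiosity", 0.5)]
print(check_balance(modules))   # []

In the research phase, something like this would run continuously, 
with a human (or a watchdog process) reviewing anything it flags.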

> 
>> I cannot answer all your points in full detail, but it is worth noting that
>> things like the fanatic mindset (suicide bombers, etc.) are probably a result
>> of the interaction of motivation modules that would not be present in the
>> AGI.
> 
> Hopefully this will be the case. I tend towards optimism, so for the
> moment, I'll give you this point.
> 
>>  Foremost among them, the module that incites tribal loyalty and
>> hatred (in-group, out-group feelings).  Without that kind of module
>> (assuming it is a distinct module) the system would perhaps have no chance
>> of drifting in that direction.
> 
> Here it sounds like we differ. I would propose that "young" AGIs be
> given to exemplary parents in every culture we can find. Raising them
> as they would their own youth, we preserve the richness of human
> diversity that we are risking losing today. After all, we are losing
> languages and culture to the global mono-culture at an alarming rate
> today just among humans. If all AGIs are taught in the same laboratory
> or Western context, we will end up with a mono-culture in the AGI
> strains that will potentially have a negative impact on preserving
> human diversity.

Although I completely agree with your goal here, I would say this is a 
different issue, with different answers.  Very good answers, I suggest, 
but somewhat peripheral to this discussion.

The crucial issue, at the beginning, is to understand and build the 
correct foundations.  So, I am talking about giving the AGI the kind of 
underlying mechanisms that will make it grow towards a caring, empathic 
individual, and avoiding the kind of mechanisms that would make it 
psychopathic.  Then, and only then, comes the youthful experience of the 
AGI (which you are focussing on).  The experience part is important, but 
I am really only trying to make arguments about the construction phase 
at the moment.

What it boils down to is the fact that some humans are born with damaged 
motivation mechanisms, such that there is no ability to empathize and 
bond.  No amount of youthful happiness will matter to those people.  My 
primary concern at the moment is to understand that, and to design AGIs 
so that it does not happen.
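
Construction-phase checks could be as blunt as validating the module 
spec before any "upbringing" begins.  A sketch (module names 
hypothetical):

REQUIRED = {"empathy", "social_bonding"}
FORBIDDEN = {"in_group_loyalty"}   # the tribal module discussed above

def validate_spec(modules):
    """Refuse to build an AGI whose motivation spec is missing the
    bonding mechanisms, or contains the ones we want excluded."""
    missing = REQUIRED - set(modules)
    present = FORBIDDEN & set(modules)
    if missing:
        raise ValueError("spec missing foundational modules: %s" % missing)
    if present:
        raise ValueError("spec contains excluded modules: %s" % present)

validate_spec({"empathy", "social_bonding", "curiosity"})   # passes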

> I respect other people's belief systems, and I want AGIs with all
> kinds of belief systems. Even if many of them end up evolving beyond
> their core training, having that core is important to maintaining
> empathy towards the group that has that core belief system. I would
> hate for AGIs to decide that the Amish were not worth preserving just
> because no AGI had ever been raised in an Amish household.

I seriously doubt that will happen.  But that is a discussion for 
another day.



>> And even in a suicide bomber, there are
>> other motivations fighting to take over and restore order, right up to the
>> last minute:  they sweat when they are about to go.
> 
> Perhaps. As one who has previously held strong religious beliefs, I
> can put myself into the head of a suicide bomber quite well, and I can
> see the possibility of not sweating it.
> 
>> Answering the ideas you throw into the ring, in your comment, would be
>> fodder for an entire essay.  Sometime soon, I hope...
> 
> Clearly, there is a lot of ground to cover. Here are some of the
> things I care about...
> 
> 1) How do we preserve the diversity of human culture as we evolve past
> being purely human?
> 2) How do we create AGIs?
> 3) How do we ensure that human beings (enhanced or natural) can
> continue to live in the same society with the AGIs?
> 4) How can we protect society from rogue AGIs?
> 5) How is this all best done without offending the religious majority
> and generating painful backlash? (i.e. How do you prevent a civil war
> between the religious fundamentalists and the AGIs?)

I have answers (proposed answers, at least).  But that is an entire 
book.  ;-)



Richard Loosemore


