[ExI] Empathic AGI [WAS Safety of human-like motivation systems]

Kelly Anderson kellycoinguy at gmail.com
Sun Feb 6 07:50:26 UTC 2011

On Sat, Feb 5, 2011 at 9:39 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly,
> This is exactly the line along which I am going.   I have talked in the
> past about building AGI systems that are "empathic" to the human
> species, and which are locked into that state of empathy by their
> design.

I would not propose to "design" empathy, but rather to "train" toward
empathy. I envision raising AGIs just as one would raise a child. This
would train them to think as though they were human, or at least as
though they had been adopted by humans. As they mature, their learning
could be accelerated, or the net could be copied and further learning
could branch in many directions, but that core of humanity is the most
important thing to get right if the AGIs and future humans are to live
in some kind of harmony.

> Your sentence above:
>> It seems to me that one safety precaution we would want to have is
>> for the first generation of AGI to see itself in some way as actually
>> being human, or self identifying as being very close to humans.
> ... captures exactly the approach I am taking.  This is what I mean by
> building AGI systems that feel empathy for humans.  They would BE humans in
> most respects.

I think AGIs should see us as their ancestors. I would hope to be
thought of with the kind of respect we would feel for Homo erectus
(were they still around). Kurzweil claims that increased intelligence
leads to increased empathy, which is an interesting hypothesis. I
wouldn't know how to test it, but it does seem to be a trend.

> I envision a project to systematically explore the behavior of the
> motivation mechanisms.  In the research phases, we would be directly
> monitoring the balance of power between the various motivation modules, and
> also monitoring for certain patterns of thought.

Here you devolve into the vagueness that makes this discussion
difficult for me. Are you talking of studying humans here?

> I cannot answer all your points in full detail, but it is worth noting that
> things like the fanatic mindset (suicide bombers, etc.) are probably a result
> of the interaction of motivation modules that would not be present in the
> AGI.

Hopefully this will be the case. I tend towards optimism, so for the
moment, I'll give you this point.

> Foremost among them, the module that incites tribal  loyalty and
> hatred (in-group, out-group feelings).  Without that kind of module
> (assuming it is a distinct module) the system would perhaps have no chance
> of drifting in that direction.

Here it sounds like we differ. I would propose that "young" AGIs be
given to exemplary parents in every culture we can find. If those
parents raise them as they would their own children, we preserve the
richness of human diversity that we risk losing today. After all, even
just among humans, we are losing languages and cultures to the global
monoculture at an alarming rate. If all AGIs are taught in the same
laboratory or Western context, we will end up with a monoculture among
the AGI strains that could have a negative impact on preserving human
diversity.

I respect other people's belief systems, and I want AGIs with all
kinds of belief systems. Even if many of them end up evolving beyond
their core training, having that core is important to maintaining
empathy towards the group that has that core belief system. I would
hate for AGIs to decide that the Amish were not worth preserving just
because no AGI had ever been raised in an Amish household.

> And even in a suicide bomber, there are
> other motivations fighting to take over and restore order, right up to the
> last minute:  they sweat when they are about to go.

Perhaps. As one who has previously held strong religious beliefs, I
can put myself into the head of a suicide bomber quite well, and I can
see the possibility of not sweating it.

> Answering the ideas you throw into the ring, in your comment, would be
> fodder for an entire essay.  Sometime soon, I hope...

Clearly, there is a lot of ground to cover. Here are some of the
things I care about...

1) How do we preserve the diversity of human culture as we evolve past
being purely human?
2) How do we create AGIs?
3) How do we ensure that human beings (enhanced or natural) can
continue to live in the same society with the AGIs?
4) How can we protect society from rogue AGIs?
5) How is this all best done without offending the religious majority
and generating painful backlash? (i.e. How do you prevent a civil war
between the religious fundamentalists and the AGIs?)
