[ExI] Empathic AGI [WAS Safety of human-like motivation systems]

Richard Loosemore rpwl at lightlink.com
Sat Feb 5 16:39:53 UTC 2011


Kelly Anderson wrote:
> On Fri, Feb 4, 2011 at 10:01 AM, Richard Loosemore
> <rpwl at lightlink.com> wrote:
>> Any intelligent system must have motivations (drives, goals, etc)
>> if it is to act intelligently in the real world.  Those motivations
>> are sometimes trivially simple, and sometimes they are not
>> *explicitly* coded, but are embedded in the rest of the system
>> ...... but either way there must be something that answers to the
>> description of "motivation mechanism", or the system will sit there
>> and do nothing at all. Whatever part of the AGI makes it organize
>> its thoughts to some end, THAT is the motivation mechanism.
> 
> Richard, This is very clearly stated, and I agree with it 100%. 
> Motivation is a kind of meta-context that influences how intelligent 
> agents process everything. I think it remains to be seen whether we 
> can create intelligences that lack certain "undesirable" human 
> motivations without creating psychological monstrosities.
> 
> There are a number of interesting psychological monstrosities from
> the science fiction genre. The one that occurs to me at the moment is
> from the Star Trek: The Next Generation episode entitled "The Perfect
> Mate" (http://en.wikipedia.org/wiki/The_Perfect_Mate), where a woman is
> genetically designed to bond with a man in a way reminiscent of how
> birds bond to the first thing they see when they hatch. The point
> being that when you start making some motivations stronger than
> others, you can end up with very strange and unpredictable results.
> 
> Of course, this happens in humans too. Snake charming Pentecostal 
> religions and suicide bombers come to mind amongst many others.
> 
> In our modern (and hopefully rational) minds, we see a lot of 
> motivations as being irrational, or dangerous. But are those 
> motivations also necessary to be human? It seems to me that one
> safety precaution we would want to have is for the first generation
> of AGI to see itself in some way as actually being human, or self
> identifying as being very close to humans. If they see real human
> beings as their "parents", that might be helpful in creating safer
> systems.
> 
> One of the key questions for me is just what belief systems are 
> desirable for AGIs. Should some be "raised" Muslim, Catholic,
> atheist, etc.? What moral and ethical systems do we teach AGIs? All of
> the systems? Some of them? Do we turn off the ones that don't "turn
> out right"? There are a lot of interesting questions here in my mind.
> 
> 
> To duplicate as many human cultures in our descendants as we can,
> even if they are not strictly biologically human, seems like a good
> way to ensure that those cultures continue to flourish. Or do we
> just create all AGIs with a mono-culture? That seems like a big loss
> of richness. On the other hand, differing cultures cause many
> conflicts.


Kelly,

This is exactly the direction in which I am going.  I have talked in the
past about building AGI systems that are "empathic" to the human
species, and which are locked into that state of empathy by their
design.  Your sentence above:

> It seems to me that one safety precaution we would want to have is
> for the first generation of AGI to see itself in some way as actually
> being human, or self identifying as being very close to humans.

... captures exactly the approach I am taking.  This is what I mean by 
building AGI systems that feel empathy for humans.  They would BE humans 
in most respects.

I envision a project to systematically explore the behavior of the 
motivation mechanisms.  In the research phases, we would be directly 
monitoring the balance of power among the various motivation modules, 
and also monitoring for certain patterns of thought.
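Purely as an illustration (not an actual design), here is a minimal Python 
sketch of what "monitoring the balance of power" among motivation modules 
might look like.  The module names, the single activation number per 
module, and the dominance threshold are all hypothetical assumptions made 
for the sake of the example.

# Hypothetical sketch only: toy motivation modules and a monitor that
# reports their relative strengths.  Names and threshold are made up.
from dataclasses import dataclass

@dataclass
class MotivationModule:
    name: str
    activation: float   # current strength of this drive, 0.0 to 1.0

def balance_report(modules, dominance_threshold=0.6):
    """Return each module's share of total activation, plus any module
    whose share exceeds the (assumed) dominance threshold."""
    total = sum(m.activation for m in modules) or 1.0
    shares = {m.name: m.activation / total for m in modules}
    dominant = [name for name, share in shares.items()
                if share > dominance_threshold]
    return shares, dominant

# An empathy-weighted profile, with no tribal-loyalty module present at all.
modules = [
    MotivationModule("empathy", 0.8),
    MotivationModule("curiosity", 0.5),
    MotivationModule("self_preservation", 0.3),
]
shares, dominant = balance_report(modules)
print(shares)      # relative strength of each motivation
print(dominant)    # any modules currently dominating the balance

The point of such a monitor would be to watch how those shares shift over 
time, and to flag the patterns of thought that accompany any one module 
starting to dominate, rather than to judge a single snapshot.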

I cannot answer all your points in full detail, but it is worth noting 
that things like the fanatic mindset (suicide bombers, etc.) are probably 
a result of the interaction of motivation modules that would not be 
present in the AGI.  Foremost among them is the module that incites 
tribal loyalty and hatred (in-group/out-group feelings).  Without that 
kind of module (assuming it is a distinct module), the system would perhaps 
have no chance of drifting in that direction.  And even in a suicide 
bomber, there are other motivations fighting to take over and restore 
order, right up to the last minute:  they sweat when they are about to go.

Answering the ideas you throw into the ring in your comment would be 
fodder for an entire essay.  Sometime soon, I hope...




Richard Loosemore



