[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
Kelly Anderson
kellycoinguy at gmail.com
Sat Feb 5 08:03:12 UTC 2011
On Fri, Feb 4, 2011 at 10:01 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Any intelligent system must have motivations (drives, goals, etc) if it is
> to act intelligently in the real world. Those motivations are sometimes
> trivially simple, and sometimes they are not *explicitly* coded, but are
> embedded in the rest of the system ...... but either way there must be
> something that answers to the description of "motivation mechanism", or the
> system will sit there and do nothing at all. Whatever part of the AGI makes
> it organize its thoughts to some end, THAT is the motivation mechanism.
Richard, this is very clearly stated, and I agree with it 100%.
Motivation is a kind of meta-context that influences how intelligent
agents process everything. I think it remains to be seen whether we
can create intelligences that lack certain "undesirable" human
motivations without creating psychological monstrosities.
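To make the idea concrete, here is a toy sketch (purely illustrative, not
Richard's or anyone else's proposed architecture, and every name in it is
hypothetical): the "motivation mechanism" is nothing more than a scoring
function over candidate actions, and if you strip it out the agent has no
basis for choosing anything, i.e. it "sits there and does nothing at all."

def perceive(world):
    # Crude observation of the world state.
    return {"hunger": world["hunger"], "threat": world["threat"]}

def motivation(observation, action):
    # Score how well an action serves the agent's drives -- the
    # "meta-context" that shapes all of its processing.
    if action == "eat":
        return observation["hunger"]
    if action == "flee":
        return observation["threat"]
    return 0.0  # "idle" serves no drive

def step(world, actions=("eat", "flee", "idle")):
    obs = perceive(world)
    # Without some motivation function there is nothing to rank
    # candidate actions by, and no behavior gets organized at all.
    return max(actions, key=lambda a: motivation(obs, a))

world = {"hunger": 0.8, "threat": 0.1}
print(step(world))  # -> "eat": the dominant drive organizes behavior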
Science fiction offers a number of interesting psychological
monstrosities. The one that occurs to me at the moment is from the
Star Trek: The Next Generation episode entitled "The Perfect Mate":
http://en.wikipedia.org/wiki/The_Perfect_Mate
In it, a woman is genetically engineered to bond with a man in a way
reminiscent of how birds imprint on the first thing they see when they
hatch. The point is that when you start making some motivations
stronger than others, you can end up with very strange and
unpredictable results.
Of course, this happens in humans too. Snake-handling Pentecostal
sects and suicide bombers come to mind, among many others.
In our modern (and hopefully rational) minds, we see a lot of
motivations as irrational or dangerous. But are those motivations also
necessary to being human? It seems to me that one safety precaution we
would want is for the first generation of AGI to see itself in some
way as actually being human, or to self-identify as being very close
to human. If they see real human beings as their "parents", that might
help us create safer systems.
One of the key questions for me is just what belief systems are
desirable for AGIs. Should some be "raised" Muslim, Catholic, Atheist,
etc.? What moral and ethical systems do we teach AGIs? All of the
systems? Some of them? Do we turn off the ones that don't "turn out
right"? There are a lot of interesting questions here, to my mind.
Duplicating as many human cultures as we can in our descendants, even
if they are not strictly biologically human, seems like a good way to
ensure that those cultures continue to flourish. Or do we just create
all AGIs with a mono-culture? That seems like a big loss of richness.
On the other hand, differing cultures cause many conflicts.
-Kelly