[ExI] AI motivation, was malevolent machines

Keith Henson hkeithhenson at gmail.com
Sat Apr 12 20:48:54 UTC 2014


On Sat, Apr 12, 2014 at 2:08 AM,  Anders Sandberg <anders at aleph.se> wrote:

snip

> My concern is that just like in the moral/value case most friendliness research has been thinking about, getting complex and fragile human social status concepts into the machine seems to be hard.

It might not be as hard as it seems, or it could be even harder.

Humans seem to be mostly blind to their own motivations.  It may be
that being blind to your motivations improved reproductive success in
the past, though I really don't know.  I do know that being aware of
your own motivations can get you an awful lot of social flak, at
least if you talk about it.  I didn't come to understand my own
motivations through introspection; I came to understand them by
studying the evolution of motives (such as seeking social status) in
social primates.  Being an unexceptional social primate, I wrote
about this and applied it to myself--with disastrous results.

In the intervening 15 years, seeking social status as a human
motivation has become relatively accepted, though it's probably still
something you don't want to talk about self-referentially.

It's a good question whether understanding your own motives might
make you more successful.  Personally, I can't say it has, but then
how can you make a rational judgment about that?

> And if we miss what we really mean by social status we might get powerful systems playing a zero-sum game with arbitrary markers that just *sound* like they are social status as we know it. In such a situation humans might be hopelessly outclassed and the desired integration of machine and human society never happens.

Playing a zero-sum game, arbitrary markers or not, is an overall
loss.  High social status is a fairly tricky concept today.  Status
seeking was originally selected for because of the reproductive
success it conferred, especially on males.  That selection pressure
seems unlikely to apply to AIs, but then again, perhaps we should
breed AIs based on their obtaining high social status.

> So the issue to think about is how to make sure the concepts actually mesh. And that playing the game doesn't lead to pathologies even when you are smart: we know the moral game can get crazy for certain superintelligences.

An interesting point that might help understanding is *why* we are
mostly not conscious of our motives.  Even though I am aware that I
must have this motivation for status seeking, it's an abstract
intellectual awareness, not a reason to get up in the morning.  There
must be some reproductive-success element in not being aware of our
own motivations.  Perhaps we need to hide our motives even from the
rest of our own minds to keep them from being too obvious to other
social primates.

It's going to complicate attempts to design AI "in our own image" when
we are blind to some parts of that image.

Keith



