[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

Richard Loosemore rpwl at lightlink.com
Fri Feb 4 17:01:17 UTC 2011


Stefano Vaj wrote:
> On 3 February 2011 20:20, Richard Loosemore <rpwl at lightlink.com> wrote:
>> Stefano Vaj wrote:
>>> Am I the only one finding all that a terribly naive projection?
>> I fail to understand.  I am talking about mechanisms.   What projections are
>> you talking about?
> 
> "Altruism", "empathy", "aggressive"... What do we exactly mean when we
> say than a car is aggressive or altruistic?
> 
>> Wait!  There is nothing metaphorical about this.  I am not a poet, I am a
>> cognitive scientist ;-).  I am describing the mechanisms that are (probably)
>> at the root of your cognitive system.  Mechanisms that may be the only way
>> to drive a full-up intelligence in a stable manner.
> 
> Under which definition of "intelligence"? A system can have arbitrary
> degrees of intelligence without exhibiting any biological, let alone
> human, trait at all. Unless of course intelligence is defined in
> anthropomorphic terms. In which case we are just speaking of uploads
> of actual humans, or of patchwork, artificial humans (perhaps at the
> beginning of chimps...).

Any intelligent system must have motivations (drives, goals, etc.) if it 
is to act intelligently in the real world.  Those motivations are 
sometimes trivially simple, and sometimes they are not *explicitly* 
coded but are instead embedded in the rest of the system.  Either way, 
there must be something that answers to the description of "motivation 
mechanism", or the system will sit there and do nothing at all. 
Whatever part of the AGI makes it organize its thoughts to some end, 
THAT is the motivation mechanism.

Generally speaking, the motivation mechanism in an AGI can take many 
different forms.

In a human cognitive system, by contrast, we understand that it takes a 
particular form (probably the modules I talked about).
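
To make the point concrete, here is a minimal sketch (Python, with 
hypothetical class and module names chosen purely for illustration) of 
the role a "motivation mechanism" plays: it is whatever component ranks 
candidate goals and hands the winner to the rest of the system.  It 
contrasts a bare explicit-utility design with a diffuse, module-based 
one; it is a toy picture, not a description of any actual AGI 
architecture.

    from abc import ABC, abstractmethod

    # Hypothetical sketch only: the names below are illustrative.

    def module_bias(module_name, goal, situation):
        """Placeholder for how strongly one module favors a goal."""
        return 0.0

    class MotivationMechanism(ABC):
        """Whatever selects which goal the agent pursues next."""

        @abstractmethod
        def select_goal(self, candidate_goals, situation):
            ...

    class ExplicitUtilityMotivation(MotivationMechanism):
        """A generic AGI design: one explicit, hand-coded utility function."""

        def __init__(self, utility_fn):
            self.utility_fn = utility_fn

        def select_goal(self, candidate_goals, situation):
            return max(candidate_goals,
                       key=lambda g: self.utility_fn(g, situation))

    class ModuleBasedMotivation(MotivationMechanism):
        """A human-like design: several diffuse modules each bias the choice."""

        def __init__(self, module_weights):
            # e.g. {"empathy": 1.0, "aggression": 0.3, "curiosity": 0.5}
            self.module_weights = module_weights

        def select_goal(self, candidate_goals, situation):
            def score(goal):
                return sum(w * module_bias(name, goal, situation)
                           for name, w in self.module_weights.items())
            return max(candidate_goals, key=score)

In this toy picture, dropping "empathy" from the weights while keeping 
"aggression" corresponds to the configuration quoted further down: 
aggression modules and no empathy module.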

The problem with your criticism of my text is that you are conflating 
claims that I make about:

    (a) Human motivation mechanisms,
    (b) AGI motivation mechanisms in general, and
    (c) The motivation mechanisms in an AGI that is designed to resemble
        the human motivational design.

So, your comment "What do we exactly mean when we say that a car is 
aggressive or altruistic?" is beside the point, since I made no claim 
that a car has a motivation mechanism, or an aggression module.

The rest of your text simply does not address the points I was making, 
but goes off in other directions that I do not have the time to pursue.



>> Thus, a human-like motivation system can be given aggression modules, and no
>> empathy module.  Result: psychopath.
> 
> This is quite debatable indeed even for human "psychopathy", which is
> a less than objective and universal concept...
> 
> Different motivation sets may be better or worse adapted depending on
> the circumstances, the cultural context and one's perspective.
> 
> Ultimately, it is just Darwinian whispers all the way down, and if you
> are looking for biological-like behavioural traits you need either to
> evolve them with time in an appropriate emulation of an ecosystem
> based on replication/mutation/selection, or to emulate them directly.
> 
> In both scenarios, we cannot expect any convincing
> emulation of a biological organism to behave any differently (and/or
> be controlled by different motivations) in this respect than... any
> actual organism.
> 
> Otherwise, you can go on developing increasingly intelligent systems
> that are no more empathic or aggressive than a cellular automaton, an
> abacus, a PC or a car. All entities which we can *already* define as
> beneficial or detrimental to any set of values we choose to adhere to,
> without too much "personification".

This has nothing to do with adaptation!  Completely irrelevant.

And your comments about "emulation" are wildly inaccurate:  we are not 
"forced" to emulate the exact behavior of living organisms.  That simply 
does not follow!

I cannot address the rest of these comments, because I no longer see any 
coherent argument here, sorry.


Richard Loosemore



