[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

Stefano Vaj stefano.vaj at gmail.com
Fri Feb 4 15:36:34 UTC 2011


On 3 February 2011 20:20, Richard Loosemore <rpwl at lightlink.com> wrote:
> Stefano Vaj wrote:
>> Am I the only one finding all that a terribly naive projection?
> I fail to understand.  I am talking about mechanisms.   What projections are
> you talking about?

"Altruism", "empathy", "aggressive"... What do we exactly mean when we
say than a car is aggressive or altruistic?

> Wait!  There is nothing metaphorical about this.  I am not a poet, I am a
> cognitive scientist ;-).  I am describing the mechanisms that are (probably)
> at the root of your cognitive system.  Mechanisms that may be the only way
> to drive a full-up intelligence in a stable manner.

Under which definition of "intelligence"? A system can have an
arbitrary degree of intelligence without exhibiting any biological,
let alone human, trait at all. Unless, of course, intelligence is
defined in anthropomorphic terms, in which case we are simply speaking
of uploads of actual humans, or of patchwork, artificial humans
(perhaps of chimps, at the beginning...).

> Thus, a human-like motivation system can be given aggression modules, and no
> empathy module.  Result: psychopath.

This is quite debatable even for human "psychopathy", which is hardly
an objective and universal concept...

Different motivation sets may be better or worse adapted depending on
the circumstances, the cultural context and one's perspective.

Ultimately, it is just Darwinian whispers all the way down, and if you
are looking for biology-like behavioural traits, you need either to
evolve them over time in an appropriate emulation of an ecosystem
based on replication/mutation/selection, or to emulate them directly.

In both scenarios, we cannot expect any convincing emulation of a
biological organism to behave any differently (and/or be controlled by
different motivations) in this respect than... any actual organism.

Otherwise, you can go on developing increasingly intelligent systems
that are no more empathic or aggressive than a cellular automaton, an
abacus, a PC or a car: all entities which we can *already* describe as
beneficial or detrimental to any set of values we choose to adhere to,
without too much "personification".

-- 
Stefano Vaj
