[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

Richard Loosemore rpwl at lightlink.com
Fri Feb 4 20:29:14 UTC 2011


Stefano Vaj wrote:
> On 4 February 2011 18:01, Richard Loosemore <rpwl at lightlink.com> wrote:
>> Stefano Vaj wrote:
>>> Under which definition of "intelligence"? A system can have arbitrary
>>> degrees of intelligence without exhibiting any biological, let alone
>>> human, trait at all. Unless of course intelligence is defined in
>>> anthropomorphic terms. In which case we are just speaking of uploads
>>> of actual humans, or of patchwork, artificial humans (perhaps at the
>>> beginning of chimps...).
>> Any intelligent system must have motivations (drives, goals, etc) if it is
>> to act intelligently in the real world.  Those motivations are sometimes
>> trivially simple, and sometimes they are not *explicitly* coded, but are
>> embedded in the rest of the system ...... but either way there must be
>> something that answers to the description of "motivation mechanism", or the
>> system will sit there and do nothing at all. Whatever part of the AGI makes
>> it organize its thoughts to some end, THAT is the motivation mechanism.
> 
> An intelligent system is simply a system that executes a program.

Wrong.

I'm sorry, but that is a gross distortion of the normal usage of 
"intelligent".

It does not follow that, because a system executes a program, it is 
therefore intelligent.

> An amoeba, a cat or a human being basically executes a Darwinian
> program (with plenty of spandrels thrown in by evolutionary history
> and peculiar make of each of them, sure).

If what you mean to say here is that cats, amoebae and humans execute 
programs DESIGNED by darwinian evolution, then this is true, but 
irrelevant:  how the program came to exist has no bearing on the 
question of how the program actually works today.

There is nothing "darwinian" about the human cognitive system.  You are 
confusing two things:

   (a)  The PROCESS of construction of a system, and

   (b)  The FUNCTIONING of a particular system that went through that
        process of construction

> A PC, a cellular automaton or a Turing machine normally execute other
> kinds of program, even though they may in principle be programmed to
> execute Darwinian-like programs, behaviourally identical to that of
> organisms.

True, except for the reference to "Darwinian-like programs", which is 
meaningless.

A human cognitive system can be implemented in a PC, a cellular 
automaton or a Turing machine, without regard to whatever darwinian 
processes originally led to the design of the original form of the human 
cognitive system.


> If they do (e.g., because they run an "uploaded" human identity) they
> become Darwinian machines as well, and in that case they will be as
> altruistic and as aggressive as their fitness maximisation will
> command. That would be the point, wouldn't it?

A human-like cognitive system running on a computer has nothing whatever 
to do with darwinian evolution.  It is not a "darwinian machine" because 
that phrase "darwinian machine" is semantically empty.  There is no such 
property "darwinian" that can be used here, except the trivial property

"Darwinian" ==  "System that resembles, in structure, another system
                  that was originally designed by a darwinian process"

That definition is trivial because nothing follows from it.

It is a distinction without a difference.

More importantly, perhaps, an uploaded human identity is only ONE way to 
build a human-like cognitive system in a computer.  It has no relevance 
to the original issue here, because I was never talking about uploading, 
only about the mechanisms themselves, and about building artificial 
mechanisms of the same design.

That is, using PART of the design of the human motivation mechanism.

> If they do not, they may become ever more intelligent, but speaking of
> their "motivations" in any sense which would not equally apply to a
> contemporary Playstation or to an abacus does not really make any
> sense, does it?


Quite the contrary, it would make perfect sense.

Their motivations are defined by functional components.  If the 
functionality of the motivation mechanism in an AGI resembles the 
functionality of a human motivation mechanism, what else is there to 
say?  Both will behave in a way that can properly be described in 
motivational terms.

Motivations do not emerge at random from the functioning of an AGI; 
they have to be designed into the system at the outset.

There is a mechanism in there, responsible for the motivations of the 
system.  All I am doing is talking about the design and performance of 
that mechanism.
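
To make that concrete, here is a toy sketch in Python (invented purely 
for illustration; it is not anyone's actual AGI design, and the drive 
names and weights are made up) of a motivation mechanism as an explicit, 
designed-in functional component:  something that weights a set of 
drives and decides which goal the rest of the system organizes its 
thinking around.

    # Toy illustration only: a "motivation mechanism" as an explicit,
    # designed-in component, separate from whatever does the thinking.
    from dataclasses import dataclass

    @dataclass
    class Drive:
        name: str       # e.g. "satisfy-curiosity" (names invented here)
        weight: float   # designed-in priority, fixed by the builder
        urgency: float  # current state of the system/world, 0..1

    class MotivationMechanism:
        """Decides which goal the rest of the system should pursue."""
        def __init__(self, drives):
            self.drives = drives

        def current_goal(self):
            # The system's "motivation" at any moment is the output of
            # this designed component, not an accident of emergence.
            return max(self.drives, key=lambda d: d.weight * d.urgency).name

    mechanism = MotivationMechanism([
        Drive("satisfy-curiosity", weight=0.6, urgency=0.9),
        Drive("conserve-resources", weight=0.8, urgency=0.2),
    ])
    print(mechanism.current_goal())   # -> satisfy-curiosity

Whether the real mechanism is a weighted table like this, a neural 
substrate, or something far messier, the point is the same:  it is a 
designed part of the architecture, and its behaviour can be examined 
and described in motivational terms.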



Richard Loosemore
