[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

Stefano Vaj stefano.vaj at gmail.com
Fri Feb 4 18:28:09 UTC 2011

On 4 February 2011 18:01, Richard Loosemore <rpwl at lightlink.com> wrote:
> Stefano Vaj wrote:
>> Under which definition of "intelligence"? A system can have arbitrary
>> degrees of intelligence without exhibiting any biological, let alone
>> human, trait at all. Unless of course intelligence is defined in
>> anthropomorphic terms. In which case we are just speaking of uploads
>> of actual humans, or of patchwork, artificial humans (perhaps at the
>> beginning of chimps...).
> Any intelligent system must have motivations (drives, goals, etc) if it is
> to act intelligently in the real world.  Those motivations are sometimes
> trivially simple, and sometimes they are not *explicitly* coded, but are
> embedded in the rest of the system ...... but either way there must be
> something that answers to the description of "motivation mechanism", or the
> system will sit there and do nothing at all. Whatever part of the AGI makes
> it organize its thoughts to some end, THAT is the motivation mechanism.

An intelligent system is simply a system that executes a program.

An amoeba, a cat or a human being basically executes a Darwinian
program (with plenty of spandrels thrown in by evolutionary history
and the peculiar makeup of each of them, sure).

A PC, a cellular automaton or a Turing machine normally executes other
kinds of program, even though it may in principle be programmed to
execute Darwinian-like programs, behaviourally identical to those of
biological organisms.
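
To make the "system that just executes a program" point concrete, here is a minimal illustrative sketch (not from the original post): an elementary cellular automaton running Wolfram's Rule 110. The rule number and grid are arbitrary choices for illustration; the system applies its update table and nothing more, with nothing answering to a "motivation mechanism".

```python
# Illustrative sketch: a one-dimensional cellular automaton "executing
# its program" -- pure rule application, no goals or drives involved.

RULE = 110  # Wolfram rule number; its bits encode the update table


def step(cells):
    """Apply Rule 110 once to a row of 0/1 cells (fixed zero boundaries)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # Pack the (left, self, right) neighbourhood into a 3-bit index.
        neighbourhood = (padded[i] << 2) | (padded[i + 1] << 1) | padded[i + 2]
        # Look up the next state in the rule's bit table.
        out.append((RULE >> neighbourhood) & 1)
    return out


row = [0, 0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(4):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Rule 110 is Turing-complete, so in principle such a machine could be programmed to run any program at all, Darwinian-like ones included; the point is only that executing a program implies no motivations by itself.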

If they do (e.g., because they run an "uploaded" human identity) they
become Darwinian machines as well, and in that case they will be as
altruistic and as aggressive as their fitness maximisation will
command. That would be the point, wouldn't it?

If they do not, they may become ever more intelligent, but speaking of
their "motivations" in any sense that would not equally apply to a
contemporary PlayStation or to an abacus does not really make any
sense, does it?

Stefano Vaj
