[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

Stefano Vaj stefano.vaj at gmail.com
Thu Feb 3 18:19:06 UTC 2011


On 2 February 2011 17:40, Richard Loosemore <rpwl at lightlink.com> wrote:
> The problem with humans is that they have several modules in the motivation
> system, some of them altruistic and empathic and some of them selfish or
> aggressive.   The nastier ones were built by evolution because she needed to
> develop a species that would fight its way to the top of the heap.  But an
> AGI would not need those nastier motivation mechanisms.

Am I the only one who finds all of that a terribly naive projection?

Either we deliberately program an AGI to emulate evolution-driven
"motivations", in which case we end up with an uploaded (or a
patchwork/artificial) human, animal, or vegetal individual - where it
might make some metaphorical sense to speak of "altruism" or
"selfishness", as we do with existing organisms in sociobiological
terms - or we do not do anything like that, in which case our AGI is
no more saintly or evil than my PC or Wolfram's cellular automata, no
matter what its intelligence may be.

We need not subtract anything. In principle I do not see why an AGI
should be any less absolutely "indifferent" to the results of its
actions than any other program in execution today...

-- 
Stefano Vaj
