[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
Stefano Vaj
stefano.vaj at gmail.com
Mon Feb 7 15:40:00 UTC 2011
On 5 February 2011 17:23, Richard Loosemore <rpwl at lightlink.com> wrote:
> So, to be fair, I will admit that the distinction between "How did this
> machine come to get built?" and "How does this machine actually work, now
> that it is built?" becomes rather less clear when we are talking about
> concept learning (because concepts play a role that fits somewhere between
> structure and content).
How a machine is built is immaterial to my argument. By a "Darwinian
program" I mean one whose purpose is, very roughly,
fitness-maximising.
Any such program may be the "natural" product of the
inheritance/mutation/selection mechanism operating over time, or it
can be emulated by design. In either case, empathy, aggression,
flight, selfishness, etc. have a rather literal sense, in that they
are aspects of the reproductive strategy of the individual concerned,
and/or of the replicators it carries around.
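To make the notion concrete, here is a minimal sketch of such an
inheritance/mutation/selection loop (Python, purely illustrative: the
fitness function and all names are toy assumptions, not anyone's
actual implementation). A designed system emulating this functioning
would be "Darwinian" in the relevant sense, whatever its substrate.

    import random

    def fitness(genome):
        # Toy fitness: reward genomes whose values sum close to 10.
        return -abs(sum(genome) - 10)

    def mutate(genome, rate=0.1):
        # Inheritance with occasional mutation.
        return [g + random.gauss(0, 1) if random.random() < rate else g
                for g in genome]

    population = [[random.uniform(0, 5) for _ in range(4)]
                  for _ in range(20)]

    for generation in range(100):
        # Selection: keep the fitter half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # Reproduction: survivors leave mutated copies of themselves.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(10)]

    print(max(fitness(g) for g in population))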
For anything that is not biological, or not deliberately designed to
emulate the Darwinian *functioning* of biological systems, *no matter
how intelligent it is*, I contend that aggression or altruism are
applicable only inasmuch as they are applicable to ordinary PCs or
other universal computing devices.
If, on the other hand, AGIs are programmed to execute Darwinian
programs, they would obviously be inclined to adopt whatever mix of
behaviours is best in Darwinian terms for their "genes", unless of
course the emulation is flawed. What else is new?
In fact, I maintain that they would hardly be discernible, in
behavioural terms, from a computer with an actual human brain inside.
--
Stefano Vaj