[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]

Samantha Atkins sjatkins at mac.com
Mon Feb 7 17:44:54 UTC 2011


On Feb 7, 2011, at 7:40 AM, Stefano Vaj wrote:

> On 5 February 2011 17:23, Richard Loosemore <rpwl at lightlink.com> wrote:
>> So, to be fair, I will admit that the distinction between  "How did this
>> machine come to get built?"  and  "How does this machine actually work, now
>> that it is built?" becomes rather less clear when we are talking about
>> concept learning (because concepts play a role that fits somewhere between
>> structure and content).
> 
> How a machine is built is immaterial to my argument. By a darwinian
> program I refer to one whose purpose is, very roughly,
> fitness-maximising.

So you are calling any and all goal-seeking algorithms, and anything running them, "darwinian"?  That is a bit broad.  Instead of "darwinian", which has become quite a package deal of concepts and assumptions, perhaps use "genetic algorithm based" when that is what you mean?

Not all goal-seeking is a GA.  A genetic algorithm requires a fitness function or measure of success, some means of variation, and a means of preserving those instances and traits that score better by the fitness function, possibly with some means of combining the more promising candidates.
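
For concreteness, here is a minimal sketch of exactly those parts in Python; the bitstring encoding and the toy "count the ones" fitness function are invented purely for illustration.

# Minimal genetic-algorithm sketch: a fitness measure, variation
# (mutation), preservation of fitter instances (selection), and
# combination of promising candidates (crossover).
import random

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy measure of success: number of 1-bits in the genome.
    return sum(genome)

def mutate(genome):
    # Variation: flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Combination: single-point crossover of two promising candidates.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Preservation: the fitter half of the population survives.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]
        # The rest are replaced by varied combinations of survivors.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

print(fitness(evolve()))  # typically converges toward GENOME_LEN

Remove any one of those parts and you no longer have a GA, yet what remains can still be perfectly good goal-seeking.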

> 
> Any such program may be the "natural" product of the mechanism
> "heritance/mutation/selection" along time, or can be emulated by
> design. In such case, empathy, aggression, flight, selfishness etc.
> have a rather literal sense in that they are aspects of the
> reproductive strategy of the individual concerned, and/or of the
> replicators he carries around.
> 

Here you seem to be mixing in things like reproduction and more anthropomorphic elements that are specific to a small subset of GAs.  You appear to have started with too broad a use of "darwinian" and then assumed things that are true only of a much smaller subset of things actually "darwinian".


> For anything which is not biological, or not deliberately designed to
> emulate the Darwinian *functioning* of biological systems, *no
> matter how intelligent it is*, I contend that aggression or
> altruism are applicable only inasmuch as they are to ordinary PCs or
> other universal computing devices.

That does not follow at all.  Anything that wishes to preserve itself, defines the good as that which furthers its interests, and has enough freedom of action would likely exhibit some of these behaviors.  And that has little to do with "darwinian" mechanisms per se.
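
As a hedged illustration: a plain one-step-lookahead maximizer, with no variation, selection, or reproduction anywhere in it, already avoids the action that disables it, simply because a disabled agent forfeits all future goal progress. The action names and payoff numbers below are invented for the example.

# Hedged sketch: a goal-seeking agent with no darwinian machinery.
# Each action: (name, immediate_progress, still_operational_afterward).
# All actions and values here are toy numbers for illustration only.
ACTIONS = [
    ("work_on_goal", 1.0, True),
    ("accept_shutdown", 0.0, False),
    ("hoard_resources", 0.5, True),
]

HORIZON = 10           # steps of future activity the agent plans over
BEST_STEP_VALUE = 1.0  # assumed best per-step progress while operational

def value(action):
    name, progress, operational = action
    # Future progress is available only while the agent remains
    # operational, so self-preservation falls out of plain maximization.
    future = BEST_STEP_VALUE * (HORIZON - 1) if operational else 0.0
    return progress + future

best = max(ACTIONS, key=value)
print(best[0])  # -> work_on_goal; accept_shutdown is never preferred

Nothing in that loop was inherited, mutated, or selected; the self-preserving preference comes from the goal structure alone.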

- s


