[ExI] Brain emulation, regions and AGI [WAS Re: Kelly's future]

Richard Loosemore rpwl at lightlink.com
Sat Jun 4 15:47:48 UTC 2011


Kelly Anderson wrote:
> On Sat, Jun 4, 2011 at 8:49 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
>> Stefano Vaj wrote:
>>> On 3 June 2011 16:20, Richard Loosemore <rpwl at lightlink.com> wrote:
>>>
>>>    Some of the other modules we would leave out entirely (the violent
>>>    ones).
>>>
>>> Why? How?
>> Well, for example, in the human design there is (appears to be) one module
>> that pushes the system to become dominant among the other individuals of
>> the species.  It needs to be acknowledged as superior in some way.
>>
>> And there is another module that can cause the creature to enjoy perpetrating
>> acts of destruction or violence.
>>
>> It seems clear that these modules are not necessary to the proper
>> functioning of the system (evidence:  some people have extremely weak
>> versions of these modules).  It is also transparently obvious that these
>> would be dangerous in an AGI.  So, they go.
>>
>> As for your "how?" question:  they are simply left out of the AGI design.
>>  Nothing more to it than that.
> 
> I think this is all more a matter of training or upbringing, if you
> will, rather than leaving some magickal module out. There are people
> who are wonderfully non-violent, but it's not because they don't have
> some physical module, but because they were taught that violence is
> bad, and it sank in.
> 
> Admittedly, Richard did not say whether this module was a software or
> hardware module. And we probably differ a bit on how much is
> "programming" vs. "training".

There are two things going on:  (a) the presence of a module, and (b) 
the development of that module.

That distinction is, I believe, very important.  It is all too easy to 
start thinking that motivations or drives can just emerge from the 
functioning of a system (this assumption is rampant in every discussion 
of AI motivation I have ever seen), but in truth such emergence appears 
very unlikely.  No module, no drive.

So, in the case of humans, this means that even someone who happens to 
be a non-violent person, as a result of upbringing or conscious 
decision, will almost certainly still have that module in there; it 
will just be very weak.
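
To make that distinction concrete, here is a toy sketch in Python.  All 
of the names (DriveModule, MotivationSystem) are hypothetical 
illustrations assuming nothing about any real architecture; the point 
is only that the *presence* of a module is fixed when the system is 
constructed, while development merely tunes the strength of a module 
that is already there:

# Toy sketch only; DriveModule and MotivationSystem are hypothetical
# names, not anyone's actual AGI design.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DriveModule:
    name: str
    strength: float = 0.1   # development/upbringing tunes this later

    def urge(self) -> float:
        # How hard this drive pushes on behavior.
        return self.strength

@dataclass
class MotivationSystem:
    modules: Dict[str, DriveModule] = field(default_factory=dict)

    def urge(self, drive: str) -> float:
        module = self.modules.get(drive)
        # "No module, no drive": an absent module contributes nothing,
        # and no amount of experience can develop what was never built in.
        return module.urge() if module else 0.0

# A human: the dominance module is present, merely weak after upbringing.
human = MotivationSystem({"dominance": DriveModule("dominance", strength=0.05)})
# The proposed AGI: the module is simply never constructed.
agi = MotivationSystem()

print(human.urge("dominance"))  # 0.05 -- weak, but still there
print(agi.urge("dominance"))    # 0.0  -- left out entirely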

When building an AGI, my plan is not to include modules like that and 
then try to ensure that they stay weak while the system is growing up.  
That is not my idea of how to guarantee friendliness!  Instead, we 
would leave the module out entirely.  As a result, the system might be 
unable to understand the concept of "losing it" (i.e. an outburst of 
anger), but that would not be much of a barrier to its understanding of us.

Bear in mind that part of my reason for talking in terms of modules is 
that I have in mind a specific way to implement motivation and drives in 
an AGI, and that particular approach is radically different from the 
"Goal Stack" approach that most people assume is the only way to do it.  
One feature of that alternative approach is that such modules are 
relatively easy to include.  (Although, having said that, it is still 
one of the most difficult aspects to implement.)
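
For concreteness, here is an equally crude sketch of the contrast, 
again in Python.  This takes the simplest possible reading of "Goal 
Stack", and the modular side is a toy illustration, not my actual 
implementation:

# Crude sketch only: the simplest reading of a "Goal Stack", versus
# drives-as-modules. Neither side resembles a real AGI implementation.

# Goal Stack: motivation IS an explicit stack of goals; the system
# always pursues whatever happens to be on top.
goal_stack = ["keep humans safe", "make coffee"]

def goal_stack_step() -> str:
    return goal_stack.pop() if goal_stack else "idle"

# Modular drives: no single explicit goal. Each module that exists
# pushes on candidate behaviors with some weight; action is whatever
# receives the greatest combined push.
drives = {"curiosity": 0.7, "sociability": 0.5}   # "violence" never built

def modular_step() -> str:
    candidates = {
        "explore": drives.get("curiosity", 0.0),
        "chat": drives.get("sociability", 0.0),
        "attack": drives.get("violence", 0.0),    # always 0.0: no module
    }
    return max(candidates, key=candidates.get)

print(goal_stack_step())  # "make coffee" -- whatever is on top
print(modular_step())     # "explore" -- strongest drive present wins

The point of the contrast: in the goal-stack picture, safe behavior 
has to be maintained as an explicit goal on the stack, whereas in the 
modular picture a drive that was never built in simply never competes 
for control of behavior.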




Richard Loosemore



More information about the extropy-chat mailing list