[ExI] Watson on NOVA

Richard Loosemore rpwl at lightlink.com
Tue Feb 15 22:29:26 UTC 2011


Samantha Atkins wrote:
> On 02/15/2011 09:34 AM, Richard Loosemore wrote:
>> spike wrote:
>>
>> The problem is, Spike, that you (like many other people) speak of 
>> AI/AGI as if the things that it will want to do (its motivations) will 
>> only become apparent to us AFTER we build one.
>>
>> So, you say things like "It will decide it doesn't need us, or just 
>> sees no reason why we are useful for anything."
>>
>> This is fundamentally and devastatingly wrong.  You are basing your 
>> entire AGI worldview on a crazy piece of accidental black propaganda 
>> that came from science fiction.
> 
> If an AGI is an autonomous rational agent then the meaning of whatever 
> values are installed into it on creation will evolve and clarify over 
> time, particularly in how they should be applied to actual contexts it 
> will find itself in.  Are you saying that simple proscription of some 
> actions is sufficient or that any human or group of humans can 
> sufficiently state the exact value[s] to be attained in a way that will 
> never ever in any circumstances forever lead to any unintended 
> consequences (the Genie problem)?   As an intelligent being don't you 
> wish the AGI to reflect deeply on the values it holds and their 
> relationship to one another?  Are you sure that in this reflection it 
> will never find some of the early programmed-in ones to be of 
> questionable importance or weight?  Are you sure you would want that 
> powerful a mind to be incapable of such reflection?

There are assumptions about the motivation system implicit in your 
characterization of the situation.  I have previously described this set 
of assumptions as the "goal stack" motivation mechanism.
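For concreteness, here is a rough sketch of the kind of architecture I am 
calling a "goal stack."  It is only an illustration, not anyone's actual 
design; the class and method names are arbitrary and nothing hangs on the 
details:

    # Purely illustrative sketch of a "goal stack" controller.
    # The names are invented here; the point is only the control
    # structure: a stack of goals, with the agent always serving
    # whatever goal happens to be on top of the stack.

    class Goal:
        def __init__(self, name, test, action=None, expand=None):
            self.name = name
            self.test = test        # predicate: is this goal satisfied?
            self.action = action    # primitive action to take, if any
            self.expand = expand    # planner function returning subgoals

    class GoalStackAgent:
        def __init__(self, supergoal):
            self.stack = [supergoal]    # the top-level goal sits at the bottom

        def step(self, world):
            goal = self.stack[-1]       # always work on the top of the stack
            if goal.test(world):
                self.stack.pop()        # satisfied: return to the parent goal
            elif goal.action is not None:
                goal.action(world)      # primitive goal: act on the world
            else:
                self.stack.extend(goal.expand(world))   # expand into subgoals

Everything such an agent does is derived mechanically from the supergoal by 
the expansion step, which is exactly the set of assumptions at issue here.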

What you are referring to is the inherent instability of that mechanism. 
All your points are valid, but only for that type of AGI.

My discussion, on the other hand, is predicated on a different type of 
motivation mechanism.

As well as being unstable, a goal-stack system would probably never 
actually be an AGI in the first place: it would be too stupid to be 
intelligent.  That is another side effect of the goal stack, and as a 
result it is nothing to be feared.



Richard Loosemore
