[ExI] AGI Motivation revisited [WAS Re: Isn't Bostrom seriously ...]

Richard Loosemore rpwl at lightlink.com
Wed Jun 22 19:11:33 UTC 2011


Stefano Vaj wrote:
> On 17 June 2011 17:28, Richard Loosemore <rpwl at lightlink.com> wrote:
> 
>     If I had time I would extend this argument:  the basic conclusion is
>     that in order to get a really smart AGI you will need the alternate
>     type of motivation system I alluded to above, and in that case the
>     easiest thing to do is to create a system that is empathic to the
>     human race .... you would have to go to immense trouble, over an
>     extended period of time, with many people working on the project, to
>     build something that was psychotic and smart, and I find that
>     scenario quite implausible.
> 
> 
> It is not entirely clear to me what you think of the motivations of 
> contemporary PCs, but I think you can have arbitrarily powerful and 
> intelligent computers with exactly the same motivations. According to 
> the Principle of Computational Equivalence, beyond a very low threshold 
> of complexity,  there is nothing more to "intrinsic!" intelligence than 
> performance.

A "motivation mechanism" is something that an ordinary PC does not even 
have, so I cannot for the life of me make sense of your first sentence.

A PC is roughly equivalent to a spinal column, in that its "motivation" 
is only a set of reflex actions (it responds to specific pre-programmed 
triggers).  In effect, there is no motivation mechanism whatsoever, 
because something that trivial does not deserve the label.
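
To make the contrast concrete, here is a toy sketch (in Python, purely 
illustrative - none of these names or structures come from any real 
system): a reflex machine is just a fixed lookup from trigger to canned 
response, while even the crudest motivation mechanism weighs competing 
drives against the current context before choosing an action.

    # Reflex-style behaviour: a fixed trigger -> response table, nothing more.
    REFLEXES = {
        "low_disk_space": "delete_temp_files",
        "new_mail": "show_notification",
    }

    def reflex_act(trigger):
        # No goals, no weighing of alternatives, no awareness of context.
        return REFLEXES.get(trigger)

    # A minimal "motivation mechanism": competing drives scored against context.
    def motivated_act(context, drives):
        # drives maps a name to (urgency_function, action); act on whichever
        # drive is most urgent given the current context.
        name, (urgency, action) = max(drives.items(),
                                      key=lambda kv: kv[1][0](context))
        return action

    drives = {
        "curiosity":         (lambda ctx: 0.2, "explore"),
        "self_preservation": (lambda ctx: 0.9 if ctx["threat"] else 0.1, "withdraw"),
    }
    motivated_act({"threat": True}, drives)   # -> "withdraw"

On that reading of the analogy, a stock PC has only the first kind of 
mechanism, which is why calling it "motivation" stretches the word.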

You then mention the Principle of Computational Equivalence - do you 
mean Wolfram's?  I am extremely familiar with this idea, but it does 
not (as I understand it) have the implication that "beyond a very low 
threshold of complexity, there is nothing more to 'intrinsic!' 
intelligence than performance."  Or, if it does have that implication, 
it is meant in a way that has no bearing on the question of motivation.

So I am twice puzzled by what you say.


> As to Turing-passing beings - that is, beings which may or may not be 
> performant at a given task, but which can behaviourally emulate 
> specific or generic human beings - you may have a point: either they 
> do it, and as a consequence cannot be either better or worse than what 
> they emulate, or they do not (and in that event will not be 
> recognisable as "intelligent" in any anthropomorphic sense).
> 
> As to empathy for the "human race" (!), I personally do not really 
> feel anything like that, but I do not consider myself more psychotic 
> than average, so I am not inclined to take any such rhetoric seriously.

Rhetoric?  It is not rhetoric.  If you are not psychotic (and I have no 
reason to believe that you are), then you already have some empathy for 
your species, whether you are introspectively aware of it or not.



> Sure, you may well hard-code into a computer behaviours aimed at 
> protecting such a dubious entity, and if it is put to work operating 
> the power grid you will end up without electricity the first time you 
> have to perform an abortion. Do we really need that?

What?!  I am sorry, but you will have to clarify your train of thought 
for me, because I can make no sense of this.


Richard Loosemore


