[ExI] AGI Motivation revisited [WAS Re: Isn't Bostrom seriously ...]
stefano.vaj at gmail.com
Tue Jun 21 11:10:40 UTC 2011
On 17 June 2011 17:28, Richard Loosemore <rpwl at lightlink.com> wrote:
> If I had time I would extend this argument: the basic conclusion is that
> in order to get a really smart AGI you will need the alternate type of
> motivation system I alluded to above, and in that case the easiest thing to
> do is to create a system that is empathic to the human race .... you would
> have to go to immense trouble, over an extended period of time, with many
> people working on the project, to build something that was psychotic and
> smart, and I find that scenario quite implausible.
It is not entirely clear to me what you think of the motivations of
contemporary PCs, but I think you can have arbitrarily powerful and
intelligent computers with exactly the same motivations. According to the
Principle of Computational Equivalence, beyond a very low threshold of
complexity there is nothing more to "intrinsic" intelligence than
universal computation.
As to Turing-passing beings, that is, beings which may or may not be
performant at a given task but can behaviourally emulate specific or generic
human beings, you may have a point: either they do it, and as a consequence
cannot be either better or worse than what they emulate, or they do not (and
in that event will not be recognisable as "intelligent" in any anthropomorphic
sense).
As to empathy for the "human race" (!), I personally do not really feel
anything like that, but I do not consider myself more psychotic than
average, so I am not inclined to take any such rhetoric seriously.
Sure, you may well hard-code into a computer behaviours aimed at protecting
such a dubious entity, but if such code happens to operate the power grid you
will end up without electricity the first time you have to perform an abortion.
Do we really need that?