[ExI] AI motivation, was malevolent machines (Anders Sandberg)

Anders Sandberg anders at aleph.se
Fri Apr 11 18:52:02 UTC 2014

Keith Henson <hkeithhenson at gmail.com>, 11/4/2014 6:01 AM:
On Thu, Apr 10, 2014 at 6:18 PM,  Anders Sandberg <anders at aleph.se> wrote: 
> Any way of doing a formal analysis of it? We know human status gaming can be pretty destructive.
That's a really good question.  I don't know. 
From what I see of destructive, zero-sum, or negative-sum status 
games, they seem to stem from poor intelligence or poor understanding 
of the object of the game.  Presumably an AI would be smart enough to 
play the game well. 
We should discuss this, either over Skype, or the next time I get to Oxford.
My concern is that, just as in the moral/value case that most friendliness research has focused on, getting complex and fragile human social-status concepts into a machine seems hard. If we miss what we really mean by social status, we might get powerful systems playing a zero-sum game with arbitrary markers that merely *sound* like social status as we know it. In that situation humans might be hopelessly outclassed, and the desired integration of machine and human society would never happen.

So the issue to think about is how to make sure the concepts actually mesh, and how to ensure that playing the game doesn't lead to pathologies even when you are smart: we know the moral game can go crazy for certain superintelligences.
Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy, Oxford University
