[ExI] Robust and beneficial AI

Anders Sandberg anders at aleph.se
Sun Jan 18 16:13:21 UTC 2015


BillK <pharos at gmail.com>, 18/1/2015 4:19 PM:

I've just noticed on rereading the open letter a phrase that is a bit worrying - 
"our AI systems must do what we want them to do". 


You are reading WAY too much into that phrase. 


It is intended to motivate a mainstream audience, not to be the rule of the land. 


Sure, what we actually ought to aim for is AI that does what we would have wanted it to do if we actually knew what was going on in the world and in ethics, and had given it superhumanly deep thought. But I'd rather have a sloppy but relatively easy-to-understand expression than an exact but endlessly debatable one in a document like the open letter. The news media are slapping Terminator pictures on it anyway - they do not care about the fine details of value learning or metaethics. Leave that to the actual researchers. 

Anders Sandberg, Future of Humanity Institute, Philosophy Faculty, Oxford University
