[ExI] nick's book being sold by fox news

Anders Sandberg anders at aleph.se
Mon Oct 27 19:38:37 UTC 2014

Asdd Marget <alex.urbanec at gmail.com> , 27/10/2014 7:21 PM:
AI could be incredibly dangerous if we don't get it right; I don't think anyone could argue against that. But I never see these types of articles discuss the methods and models we are attempting to develop for "Friendly AI."
(Actually, there are surprisingly many professional AI people who argue against the danger of successful AI.)
 In my opinion, we should be working harder on concepts like Yudkowsky's Coherent Extrapolated Volition (https://intelligence.org/files/CEV.pdf) to ensure we aren't simply ending our species so early in our life cycle.
Well, Eliezer would be the first to admit CEV is flawed and by now obsolete. Nick's book has a chapter on value loading, and the MIRI crowd is getting into ever more esoteric branches of logic to make something that is CEV-like. 

Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University
