[ExI] Robust and beneficial AI

Mike Dougherty msd001 at gmail.com
Sun Jan 18 15:28:00 UTC 2015

On Jan 18, 2015 10:17 AM, "BillK" <pharos at gmail.com> wrote:

> I've just noticed on rereading the open letter a phrase that is a bit
> worrying -
> "our AI systems must do what we want them to do".
> To me that means that there is no chance that they intend to let AI
> solve the problems of humanity.
> That statement is the sort of thing the director of the NSA or any
> dictator anywhere would say.

> No, 'AI doing what we want them to do' makes AI into a weapon for the
> owners giving instructions.

Given the well-publicized fears of Hawking et al., I imagine the intent of
"it does what we want" is to allay concerns that AI will pursue
anti-human goals. Your fear seems to be that it will pursue very human
goals, especially when you are excluded from the goal-setting
discussions. My fear is that successful work is being done by researchers
who aren't talking to anyone about their work, such that all these other
fears may be real but we have no way of knowing their status.