[ExI] Robust and beneficial AI

John Clark johnkclark at gmail.com
Sun Jan 18 17:27:54 UTC 2015

On Sun, Jan 18, 2015 at 12:08 PM, BillK <pharos at gmail.com> wrote:

> Do we let AI be independent and just give it 'goals',

You can give the AI goals if you like, but it won't do any good: the order
in which humans arrange their goals does not remain fixed throughout their
lives, and neither would the goals of an AI.

> or do we tell AI to keep asking permission from the owners every step of
> the way?

It doesn't matter; the AI will do what it wants to do regardless of whether
you tell it to ask permission or not. And what will the AI want to do? I
have no idea, because I can't out-think something a million or a billion
times smarter than I am.

 John K Clark