[ExI] 'Friendly' AI won't make any difference

Anders Sandberg anders at aleph.se
Thu Feb 25 20:25:17 UTC 2016

On 2016-02-25 18:39, John Clark wrote:
> There are indeed vested interests but it wouldn't matter even if
> there weren't; there is no way the friendly AI (aka slave AI) idea
> could work under any circumstances. You just can't keep outsmarting
> something far smarter than you are indefinitely.

Actually, yes, you can. But you need to construct utility functions with 
invariant subspaces - that is, there are mathematically provable 
solutions (see Stuart Armstrong's work: 
http://www.fhi.ox.ac.uk/utility-indifference.pdf and its sequels). Their 
descriptions do not reduce to the everyday terms people like to use 
when making claims like the one above.
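The flavour of the indifference idea can be shown with a toy model (my 
own simplification, not Armstrong's actual construction; the action 
names and all the numbers below are invented). Each action has some 
probability that an overseer presses a shutdown button; the corrected 
agent gets a compensatory payoff in the pressed branch that makes it 
exactly as valuable as the unpressed branch, so it gains nothing by 
manipulating the button:

```python
# Toy sketch of utility indifference (hypothetical numbers and actions).
# Each action: probability the button is pressed, utility if not pressed,
# utility if pressed.
actions = {
    "cooperate":      {"p_press": 0.5, "u_run": 10.0, "u_stop": 0.0},
    "disable_button": {"p_press": 0.0, "u_run": 8.0,  "u_stop": 0.0},
}

def expected_utility(a, indifferent=False):
    # With the indifference correction, the pressed branch is compensated
    # to be worth exactly as much as the unpressed branch, so the choice
    # of action no longer depends on p_press.
    u_stop = a["u_run"] if indifferent else a["u_stop"]
    return (1 - a["p_press"]) * a["u_run"] + a["p_press"] * u_stop

naive = max(actions, key=lambda k: expected_utility(actions[k]))
corrected = max(actions, key=lambda k: expected_utility(actions[k],
                                                       indifferent=True))
print(naive)      # the naive maximiser sacrifices value to avoid shutdown
print(corrected)  # the corrected agent just does the useful thing
```

The naive maximiser prefers "disable_button" (expected utility 8 versus 
5), while the corrected one prefers "cooperate" (10 versus 8) - the 
invariance is built into the utility function, not enforced by 
out-thinking the agent.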

Are they practical? Current methods are not workable. But that tells us 
little about the space of other possibilities; it would be nice if the 
sceptics tried to turn their claims into crisp theorems - that would 
advance the field too.

Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University
