[ExI] 'Friendly' AI won't make any difference

BillK pharos at gmail.com
Fri Feb 26 00:21:01 UTC 2016

On 25 February 2016 at 21:47, Anders Sandberg  wrote:
> And *that* is the real problem, which I personally think the friendly AI
> people - many of whom I meet on a daily basis - are not addressing
> enough. Even a mathematically perfect solution is not going to be useful if
> it cannot be implemented, and ideally approximate or flawed implementations
> should converge to the solution.
> This is why I am spending part of this spring reading up on validation
> methods and building theory for debugging complex adaptive technological
> systems.

As I understand Khannea's article, he isn't saying that Friendly AI is
impossible. He is saying that corporations are more interested in
their profits, i.e. they want AI to implement their frauds and scams
better, so that they won't get caught out. The objectives of
corporations and governments are not altruistic. They want advanced AI
to follow orders. They don't want a 'Friendly' AI that refuses to
develop weapons or plan attacks on enemies. The concern is that if a
'Friendly' AI interferes with their orders, it will get switched off.


More information about the extropy-chat mailing list