[ExI] 'Friendly' AI won't make any difference

BillK pharos at gmail.com
Thu Feb 25 10:19:18 UTC 2016


Khannea Suntzu has posted an article claiming that trying to implement
safeguards for AI won't make any difference.

<http://ieet.org/index.php/IEET/more/suntzu20160225>

The claim rests on the argument that too many vested
interests are pushing in other directions.

Quotes:
Looking at the world as it exists right now, there is ample evidence
that even safety mechanisms designed to protect the most vulnerable
completely and publicly fail.

The problem with AI systems is that they are extremely profitable in
the short run, and their profits tend to accrue to people who are
already obscenely powerful and affluent. That essentially means we
enter into a RoboCop scenario where corporate control will almost
certainly implement protections against loss of revenue.

I conclude that there are next to no reliable ways to protect against
major calamities with AI. All existing systems are already openly
conspiring against such a mechanism or infrastructure.

I suppose we’ll know before 2030 how things go, but looking at just
how corrupt academia, legal systems, governments and NGOs have become
worldwide in the last few decades, I am not holding my breath.
-------------------


BillK

