[ExI] Robust and beneficial AI

BillK pharos at gmail.com
Tue Jan 13 13:14:22 UTC 2015


On 13 January 2015 at 12:17, Anders Sandberg wrote:
<snip>
> The FLI idea is not so much to establish a one true list of What Must Be
> Done In AI as to try to reorder the priorities of people in the field based
> on some actual thinking about first-, second-, and higher-order issues. The
> fact that safety for a long time was not even regarded as a research
> priority at all should tell us something about how bad the priorities used
> to be.
>
>

The main problem with AI to date is that nobody really knows what the
breakthrough path will be. What is actually happening is incremental
improvement of existing automation.

The internet grew like Topsy, with little thought given to security.
And we got a mixture of good and bad: a lot of wild developments -
Facebook, Twitter, the cloud, discussion groups, Google, Amazon, etc. -
and a lot of hackers, criminality, identity theft, and so on. Now that
regulation is appearing, with governments deciding they have to do
'something', the web is becoming a tool of government control and
spying, and a tool for corporations to sell stuff and manipulate
people.

Regulation of AI development is likely to go the same way in our
present society. Once AI appears, there will be government and
corporate AIs that far surpass the private 'assistants' that people
can get.

'Letting a thousand flowers bloom' gives a chance that governments and
corporations will not overwhelm individuals. But, of course, there is
then the risk that AIs may be used by criminals and terrorists, or
that a rogue AI may run wild and cause much damage. (We already get
stock market flash crashes when algos run wild.)

Looks like a messy future!

BillK



More information about the extropy-chat mailing list