[ExI] Robust and beneficial AI

BillK pharos at gmail.com
Sun Jan 18 15:15:58 UTC 2015


On 13 January 2015 at 12:01, Anders Sandberg wrote:
<snip>
> (The letter is an example of people in the AI field trying to change the
> rules by shifting where the field is going.)
>
> One of the key things with AI is that it is just AI when it is not working
> well. Then it becomes automation. So most people have reason to distrust AI,
> but they trust Google, Siri, big logistics systems, airline bookings, and
> Segways - if they even think about them. To most, they are just props in a
> predefined world. To us, they are stepping stones to an ever stranger world.
>

On rereading the open letter I've just noticed a phrase that is a bit
worrying: "our AI systems must do what we want them to do".

To me that means there is no chance they intend to let AI solve
humanity's problems. That statement is the sort of thing the director
of the NSA, or any dictator anywhere, would say.

If you have to get human agreement first on 'what we want AI to do',
then nothing of significance will be achieved. Even if you did manage
to get agreement on a top-level objective, as soon as the AI started
implementing steps towards it, howls of outrage would be heard from
anyone, or any group, who saw themselves as being disadvantaged.

No, 'AI doing what we want them to do' makes AI a weapon for whoever
owns it and gives the instructions.

BillK


