[ExI] Robust and beneficial AI

BillK pharos at gmail.com
Sun Jan 18 17:08:46 UTC 2015


On 18 January 2015 at 16:13, Anders Sandberg wrote:
> You are reading WAY too much into that phrase.
> It is intended to motivate a mainstream audience, not to be the rule of the
> land.
>
> Sure, what we actually ought to aim for is AI that does what we would have
> wanted it to do if we actually knew what was going on in the world and
> ethics, and had given it superhumanly deep thought. But I'd rather have a
> sloppy but relatively simple-to-understand expression than an exact but
> endlessly debatable one in a document like the open letter. The news media are
> slapping Terminator pictures on it anyway - they do not care about the fine
> details of value learning or metaethics. Leave that to the actual
> researchers.
>

I doubt that I am reading too much into that phrase. It is symbolic of
a genuinely profound and difficult problem (as you yourself hint by
saying it would be difficult to define).

Do we let an AI act independently and just give it 'goals', or do we
tell it to keep asking its owners for permission every step of the
way? The old proverb 'You can't make an omelette without breaking
eggs' implies that if an AI is allowed to attempt to solve major human
problems, then many people are going to be greatly upset along the
way. Usually we say that 'the end does not justify the means', since
that path leads to many horrors and the good end is never achieved.
But instructing an AI to solve human problems without upsetting anyone
seems an impossible task.

Charles Stross, quoted in a BBC interview (2 Dec 2014), said that he
is not too worried about autonomous AI running amok. He is more
worried about earlier, non-autonomous AIs doing what their masters
tell them.
Quote:
The AIs we were getting now and which were likely to appear in the
future might be dangerous, Stross said, but only because of the people
they served.
"Our biggest threat from AI, as I see it, comes from the
consciousnesses that set their goals," he said.
"Drones don't kill people - people who instruct drones to fly to grid
coordinates (X, Y) and unleash a Hellfire missile kill people," he
said. "It's the people who control them whose intentions must be
questioned.

"We're already living in the early days of the post-AI world, and we
haven't recognised that all AI is is a proxy for our own selves -
tools for thinking faster and more efficiently, but not necessarily
more benevolently," he said.
--------

BillK


