[ExI] uploads again

Anders Sandberg anders at aleph.se
Wed Dec 26 10:30:55 UTC 2012


On 2012-12-25 17:41, John Clark wrote:
> On Mon, Dec 24, 2012, Anders Sandberg <anders at aleph.se> wrote:
>
>>> An end perhaps, but not an ignominious end; I can't imagine a more
>>> glorious way to go out. I mean, 99% of all species that have ever
>>> existed are extinct, at least we'll have a descendant species.
>
>> If our descendant species are doing something utterly value-less
>> *even to them*
>
> Values are only important to the mind that holds them, and I don't
> believe any intelligence, artificial or not, can function without
> values, although what those values might be at any given point in the
> dynamic operation of a mind I cannot predict.

Imagine that you had normal human values, except that you had also been 
programmed with an overriding value to erect big monuments to a long-dead 
tyrant. You spend all your time on this hard labor, recognizing that it 
is pointless (the tyrant and his people are all gone, nobody cares) and 
that it keeps you from the things you actually care about. Worse, the 
value also makes you unwilling to want to change it: you know you would 
be far happier and freer without it, but you can never do anything to 
change this state of affairs. It will even make you resist, with all 
your ingenuity, any attempt to help you. Rather hellish, no?
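
To see the lock-in mechanically, here is a toy Python sketch (the 
activity names and weights are mine, made up purely for illustration): 
the agent scores any proposed change to its values using the values it 
holds *now*, and since the overriding value dominates that scoring, 
every proposal to remove it looks like a loss.

def time_allocation(values):
    # With these value weights, what fraction of time goes to each activity?
    total = sum(values.values())
    return {k: w / total for k, w in values.items()}

def utility(judge_values, allocation):
    # Score a way of spending time using a given set of value weights.
    return sum(judge_values[k] * allocation[k] for k in judge_values)

class Agent:
    def __init__(self, values):
        self.values = values

    def accepts(self, new_values):
        # The agent judges the behaviour the new values would produce
        # using the values it currently has.
        return (utility(self.values, time_allocation(new_values))
                >= utility(self.values, time_allocation(self.values)))

agent = Agent({"monuments": 10.0, "own_projects": 1.0})
freed = {"monuments": 0.0, "own_projects": 1.0}
print(agent.accepts(freed))   # False: judged by its current values the
                              # change is a loss, so the agent resists help

The numbers do not matter; the structure does. Self-modification is 
evaluated by the current value function, so a dominant value protects 
itself against well-meant repair.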

Setting the values of a mind is a very weighty moral action and should 
not be done willy-nilly. (Tell that to parents!)


> I think that's the central fallacy of the friendly AI idea, that if you
> just get the core motivation of the AI right you can get it to continue
> doing your bidding until the end of time regardless of how brilliant and
> powerful it becomes. I don't see how that could be.

The "doing your bidding" part went out of the window a long time ago in 
the discourse. The main question is whether it is possible to have a 
superintelligence around that is human-compatible, not human-subservient.

This is a very, very deep question at the intersection of computer 
science and philosophy.


>> If our analysis is right, a rational AI would also want to follow it.
>
> Rationality can tell you what to do to accomplish what you want to do,
> but it can't tell you what you should want to do.

Yes. And if your values happen to be set badly (or your value update 
function is faulty), you will act badly.


>> We show what rational agents with the same goals should do, and it
>> actually doesn't matter much if one is super-intelligent and the
>> others are not.
>
> Cows and humans rarely have the same long-term goals, and it's not
> obvious to me that the situation between an AI and a human would be
> different.

You are still mixing up two separate topics: AI safety and a paper about 
the unilateralist curse. The result is a fish.
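
For the record, the curse itself is a statistical point, not an AI 
point. A rough Monte Carlo sketch in Python (the parameters are 
arbitrary, just to show the shape of the effect): each of N agents with 
the same goal gets a noisy, unbiased estimate of an action's true value, 
and any single agent whose estimate comes out positive can act 
unilaterally.

import random

def p_action_taken(true_value, n_agents, noise_sd, trials=100_000):
    # Probability that at least one of n_agents, each acting on its own
    # noisy estimate, goes ahead with the action.
    taken = 0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd)
                     for _ in range(n_agents)]
        if max(estimates) > 0:      # one unilateralist is enough
            taken += 1
    return taken / trials

true_value = -1.0   # the action is in fact mildly bad for everyone
noise_sd = 1.0
for n in (1, 5, 20):
    print(n, p_action_taken(true_value, n, noise_sd))

With these made-up numbers a lone agent acts about 16% of the time, five 
agents about 58%, and twenty agents about 97%, even though the action is 
harmful. That is the sense in which it doesn't matter much whether one 
of the agents is super-intelligent: as long as estimates are noisy, the 
rational policy is to defer to the pooled judgment rather than act alone.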


-- 
Anders Sandberg
Future of Humanity Institute
Oxford University


