[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Sun May 27 15:51:47 UTC 2007


Stathis had written

> > > Why not just have the happiness at no cost? You might say,
> > > because being an intelligent being has a certain je ne sais quoi,
> > > adding richness to the raw emotion of happiness. However,
> > > if you have complete access to your mind you will be able to 
> > > pin down this elusive quality and then give it to yourself directly...
> > 
[Lee]
> > Yes, heh, heh.  But that seems a strategy good only for the very
> > short term. Unless you mix in a tremendous urge to advance (which
> > not coincidentally most of us already possess), then you fail ultimately
> > by incredible orders of magnitude to obtain vastly greater satisfaction.
[Stathis]
> Why can't you get satisfaction without advancement? Unless your laziness
> had some detrimental effect on survival

You could get *some* satisfaction without advancement, and
by our lights today it would be quite a bit.  But it would be
pitifully minuscule compared to what you will get if you continue
to advance.

And yes, of course, laziness *could* impair survival, and so that
would be a second reason to demand advancement of yourself.

> you could probably get satisfaction equivalent to that of any given scenario
> directly.

Yeah, probably. A future AI might be able to grow only in the
"wire-head" direction, and neglect everything else. But who
knows?  To be really able to appreciate such growth, the AI
may have to be what we call conscious, only exceedingly so.

> A possible counterexample would be if the maximal amount
> of subjective satisfaction were proportional to the available
> computational resources, i.e., you could experience twice as
> much pleasure if you had twice as big a brain.

That seems reasonable to me, only instead of *twice*, I would
expect that exponentially more satisfaction is available for each
extra "neuron".

> This might lead AIs to consume the universe in order to
> convert it into computronium, and then fight it out amongst
> themselves.

Oh, exactly!  That has been my supposition from the beginning.
Not only will each AI want as much control over the universe
as it is able to achieve, it will use the matter it controls to
execute ever more algorithms that directly benefit it, and that
certainly includes its own satisfaction and happiness.

> However, I don't think there is any clear relationship between
> brain size and intensity of emotion. 

None?  It seems to me that a designer would be hard pressed to
make an ant capable of deriving as much pleasure, contentment,
satisfaction, ecstasy, etc., as a human is able to.  Every nuance
of our own pleasure or happiness requires some neuron firings,
I believe.

> > Isn't that really what repels us about the image of a wirehead? No
> > progress? 
> 
> Yes, but it doesn't repel everyone. Heaven is a place of great pleasure
> and no progress, and lots of people would like to believe that it exists
> so that they can go there.

I had forgotten about that:  people generally have held and do hold
such beliefs. Well, such archaic beliefs will surely become more and
more rare, as progress continues to become more and more
obvious to people.

> The difference between Heaven and wirehead hedonism or drug
> addiction is that in Heaven God looks after you so that you don't
> starve to death or neglect your dependants. Retiring to eternal bliss
> in a big computer maintained by dedicated AI systems would be
> the posthuman equivalent of Heaven. 

Yes.

> > But besides, I happen to have a very strong *predilection*
> > for learning and finding truth. "To delight in understanding" 
> > has long been my maxim for what I ultimately wish for. So,
> > even though you're right and I've been motivated to feel that
> > way by genetic systems out of my direct control (so far), I
> > would still choose to go on getting my raw pleasure 
> > indirectly.
> 
> This sort of legacy thinking is the only hope for continuing progress
> into the indefinite future. There is no reason why you should be
> able to experience *less* pleasure if you assign it to something
> you consider worthwhile rather than to idleness, so why not do so?

Right!  So I used to think that advanced AIs would (a) study math
(since everything else will probably soon be exhausted), and (b)
study gratification enhancement, i.e., how to redesign their brains
(or their internal organization) to achieve more benefit.  But lately
I've been adding (c) perimeter maintenance or expansion, i.e.,
a kind of warfare in which each tries to maximize its control of
resources, either at the expense of its neighbors, by working
together with them, or by expanding into free space.

> Moreover, there would be less reason to try to gain pleasure
> or satisfaction by doing something bad if you could as easily
> get the same reward by doing something good or doing nothing.

Yes.

> The majority of people who deliberately hurt others do so
> because they don't consider the badness of their action to
> outweigh their desire for the expected reward. 

Yes.  But as soon as we have formal control over our emotions,
why do something that everyone will condemn when you can
be a saint and get just as much pleasure?  Or, in the future,
whether abiding by the laws or not, grow by expanding your
control over resources so that, in effect, you get a bigger and
bigger brain.

Lee



