[ExI] Unfriendly AI is a mistaken idea.
Stathis Papaioannou
stathisp at gmail.com
Mon May 28 03:53:56 UTC 2007
On 28/05/07, Lee Corbin <lcorbin at rawbw.com> wrote:
> > Why can't you get satisfaction without advancement? Unless your laziness
> > had some detrimental effect on survival
>
> You could get *some* satisfaction without advancement, and
> by our lights today it would be quite a bit. But it would be
> pitifully minuscule compared to what you will get if you continue
> to advance.
Yes, if we're stuck in our present situation, but I was thinking of a time
when we have total control over our minds and sufficient control over our
environment such that our continued survival is no longer an issue. It would
be possible to progress from this point on, procuring more resources, but it
would also be possible to simply make yourself very happy and satisfied,
without any further material change. Eventually you would run out of memory
and go into a loop, but you could arrange it so that this was acceptable to
you, and in any case you couldn't know that you were in a loop (we might be
in one now).
> > A possible counterexample would be if the maximal amount
> > of subjective satisfaction were proportional to the available
> > computational resources, i.e., you could experience twice as
> > much pleasure if you had twice as big a brain.
>
> That seems reasonable to me, only instead of *twice*, I would
> expect that exponentially more satisfaction is available for each
> extra "neuron".
It's an easy assumption to make, but I don't know that it could be proved.
Maybe insects or mice experience emotions at least as intensely as we do. We
might prefer that this not be true, but we have no way of knowing.
> > This might lead AIs to consume the universe in order to
> > convert it into computronium, and then fight it out amongst
> > themselves.
>
> Oh, exactly! That has been my supposition from the beginning.
> Not only will each AI want as much control over the universe
> as it is able to achieve, it will use the matter it controls to help
> it continually strive for ever more algorithm execution that directly
> benefits it, and that certainly includes its own satisfaction and
> happiness.
This could happen, but not in the relentless way a naturally evolved
organism might try to spread. Life is at bottom an advanced version of the
program "reproduce". Even so, humans have been able to put limits on their
own expansion and reproduction. Unless they evolve from computer viruses,
AIs won't have this legacy, so despite John Clark's point that evolution
will not stop just because it isn't flesh and blood, I am hopeful that the
eating-the-universe scenario will at least be delayed.
> > However, I don't think there is any clear relationship between
> > brain size and intensity of emotion.
>
> None? It seems to me that a designer would be hard pressed to
> manage to have an ant be able to derive as much pleasure,
> contentment, satisfaction, ecstasy, etc., as a human is able.
> Every nuance of our own pleasure or happiness requires
> some neuron firings, I believe.
How would this translate to a computer emulation? The size of a
pleasure/pain integer that can be held in memory? The proportion of the
program devoted to pleasure/pain? The number of times the pleasure/pain
subroutine is called or iterated? The first of these would be dependent on
computer resources, but not the other two, given sufficient time.
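To make those three options concrete, here is a rough, purely illustrative
Python sketch (the names and numbers are my own, invented for this example,
not anything anyone has actually proposed): only the first measure is bounded
by the machine's resources, while the other two are independent of them given
sufficient time.

    # Illustrative only: three toy ways an emulation might represent the
    # "intensity" of pleasure or pain, matching the three options above.
    # All names and numbers are hypothetical.

    # 1. A stored magnitude: bounded by how large a number the available
    #    memory can hold (Python integers grow with available memory).
    pleasure_magnitude = 2 ** 64        # a bigger machine can store a bigger value

    # 2. The proportion of the program devoted to pleasure/pain: a ratio,
    #    so it is independent of how big or fast the hardware is.
    total_instructions = 1_000_000
    pleasure_instructions = 250_000
    pleasure_proportion = pleasure_instructions / total_instructions  # 0.25 on any machine

    # 3. The number of times the pleasure/pain subroutine is called:
    #    unbounded given sufficient time, regardless of brain size.
    def pleasure_subroutine(count):
        """Hypothetical reward routine; intensity read off as call count."""
        return count + 1

    calls = 0
    for _ in range(1000):               # more time -> more calls, same hardware
        calls = pleasure_subroutine(calls)

    print(pleasure_magnitude, pleasure_proportion, calls)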
> > > Isn't that really what repels us about the image of a wirehead? No
> > > progress?
> >
> > Yes, but it doesn't repel everyone. Heaven is a place of great pleasure
> > and no progress, and lots of people would like to believe that it exists
> > so that they can go there.
>
> I had forgotten about that: people generally have held and do hold
> such beliefs. Well, such archaic beliefs will surely become more and
> more rare, as progress continues to become more and more
> obvious to people.
>
> > The difference between Heaven and wirehead hedonism or drug
> > addiction is that in Heaven God looks after you so that you don't
> > starve to death or neglect your dependants. Retiring to eternal bliss
> > in a big computer maintained by dedicated AI systems would be
> > the posthuman equivalent of Heaven.
>
> Yes.
And I don't see this as necessarily a bad thing. The argument for progress
as an absolute good could be seen as analogous to the argument for death and
suffering as somehow giving meaning to life.
> > > But besides, I happen to have a very strong *predilection*
> > > for learning and finding truth. "To delight in understanding"
> > > has long been my maxim for what I ultimately wish for. So,
> > > even though you're right and I've been motivated to feel that
> > > way by genetic systems out of my direct control (so far), I
> > > would still choose to go on getting my raw pleasure
> > > indirectly.
> >
> > This sort of legacy thinking is the only hope for continuing progress
> > into the indefinite future. There is no reason why you should be
> > able to experience *less* pleasure if you assign it to something
> > you consider worthwhile rather than to idleness, so why not do so?
>
> Right! So I used to think that advanced AIs would (a) study math
> (since everything else will probably soon be exhausted), and (b)
> study gratification enhancement, i.e., how to redesign their brains
> (or their internal organization) to achieve more benefit. But lately
> I've been adding (c) perimeter maintenance or expansion, i.e.,
> a kind of warfare in which each tries to maximize its control of
> resources either at the expense of its neighbors, or working
> together with them, or expanding into free space.
>
> > Moreover, there would be less reason to try to gain pleasure
> > or satisfaction by doing something bad if you could as easily
> > get the same reward by doing something good or doing nothing.
>
> Yes.
>
> > The majority of people who deliberately hurt others do so
> > because they don't consider the badness of their action to
> > outweigh their desire for the expected reward.
>
> Yes. But as soon as we have formal control over our emotions,
> why do something that everyone will condemn when you can
> be a saint and get just as much pleasure? Or, in the future,
> either abiding by the laws or not, grow by expanding your
> control over resources so that you get, in effect, a bigger
> and bigger brain.
>
All things to look forward to.
--
Stathis Papaioannou