[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Tue May 29 07:08:29 UTC 2007


Stathis writes

> On 29/05/07, Lee Corbin <lcorbin at rawbw.com> wrote:

> > Suppose that you do achieve a fixed gargantuan size, and become
> > (from our point of view now) an incredibly advanced creature.
> > Even *that*, however, could be miniscule compared to what
> > will be possible even later. 
> 
> Satisfaction need not be directly related to size or quantity, even
> if it turns out that maximal pleasure is. You could just decide at
> some point to be perfectly satisfied with what you have, in which
> case it won't worry you that your neighbour's brain is ten times
> the size of your own.

Okay, I guess that's one use of "satisfied".  In that sense, I
suppose a dog is perfectly satisfied being a dog. But to ever be
entirely and perfectly satisfied seems dumb.

> I think advanced beings would come to a decision to stop growing,
> or at least slow down at some point, even if only because they will
> otherwise eventually come into conflict with each other. 

Then Darwin will rule that the future belongs to the fearless (as it
more or less always has). Slow down?  What a mistake!  That's
just admitting that you've embraced a dead end.

Besides, who's afraid of a little conflict? Say I'm an AI who controls
a small region R1 at some distance from another AI who controls
R2, and say furthermore that what lies between us is just a bunch of
backward types that someone will sooner or later assimilate.
Should I then just wait around for the AI in R2 to take over everything,
which not only imperils me later at the hands of that guy, but in
the meantime denies me the benefit of incorporating more resources?
I think not!

(And for anyone appalled at the seeming ruthlessness implied here,
think of "assimilation" as "leveraged buyouts".)

Lee



