[extropy-chat] Re: Identity and becoming a Great Old One

Eliezer S. Yudkowsky sentience at pobox.com
Thu Jan 26 22:52:18 UTC 2006


When I saw the subject line, I immediately thought of a bumper sticker 
reading "Great Old One or Bust".

Russell Wallace wrote:
> 
> Another scenario in which the two views might give different results is 
> the wish expressed by some transhumanists that can be summarized as 
> "when I grow up I want to be a Great Old One"; that is, over the next 
> while - say, a million subjective years - they want to continually 
> modify and augment themselves such that the result will be, as they see 
> it, as far beyond the original as the original was beyond an amoeba. (I 
> don't personally accept the analogy even given the premises, but it gets 
> the point across nicely enough that I did happily make use of it for 
> science fiction purposes.)
> 
> To me, as a subscriber to the pattern view, this doesn't make sense 
> because said entity wouldn't be me anymore, so it would be a form of 
> suicide; one could still regard the future existence of such an entity 
> as a cool thing, but why would one have a desire to use oneself in 
> particular as a seed/raw material?

Even the Eliezer of 1996, who knew so much less than I as to constitute 
perhaps a different person, knew the answer to that.

1:  A finite computer has only a finite number of possible states.  By 
the pigeonhole principle, any unending run through finitely many states 
must eventually revisit a state, and a deterministic system that 
revisits a state repeats forever after.  In the long run, then, you 
*must* either die (do no processing past some finite number of 
operations), go into an infinite loop (which likewise yields no 
genuinely new processing), or grow beyond any finite bound.  Those are 
your only three options, no matter the physics.  Being human forever 
isn't on the list.  That is not a moral judgment; it's an unarguable 
mathematical fact.  In the long run - the really long run - humanity 
isn't an option.
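
To make that concrete, here is a toy sketch in Python (the update 
rules below are my own illustration, nothing from the argument 
itself): run any deterministic step function over a finite state space 
and Floyd's cycle-finding will report one of exactly two outcomes, 
halting or looping; the third option, unbounded growth, requires an 
unbounded state space.

    def classify(step, start):
        # Floyd's tortoise-and-hare: slow advances one step per
        # iteration, fast advances two.  By the pigeonhole principle,
        # an unending run through a finite state space must revisit a
        # state, so if the run never halts, slow and fast must meet.
        slow = fast = start
        while True:
            slow = step(slow)
            if slow is None:
                return "dies: halts after finitely many operations"
            fast = step(fast)
            if fast is None:
                return "dies: halts after finitely many operations"
            fast = step(fast)
            if fast is None:
                return "dies: halts after finitely many operations"
            if slow == fast:
                return "loops: revisits a state, so repeats forever"

    # A toy 256-state machine: it has no option but to loop.
    print(classify(lambda s: (s * 31 + 7) % 256, 0))
    # A machine that runs off the end of its program: it dies.
    print(classify(lambda s: s + 1 if s < 9 else None, 0))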

2:  The Eliezer of 1996 exists woven into the *now* of 1996, where he 
belongs.  There is no need for him to exist in 2006 also.  It would be 
redundant.  If my worldline were a purely constant state, I would not 
call that immortality, but death, the cessation of processing.  Not all 
change is life, but all life is change.  Where do you draw the line 
between moment-to-moment neural updates and death?  How do you say that 
your self of 1996 has died, but your self of five seconds ago has not?

3:  There ought to be more to the choice to live than being afraid of 
dying.  It should involve wanting to do certain things, and having fun, 
and learning is part of the fun.  If you keep on learning, and you keep 
on growing stronger, sooner or later you must end up as a Great Old One. 
But that isn't even my point.  My point is that I cannot "want" to 
live forever.  My finite brain is not large enough to want an infinite 
number of things.  My desire for life, as opposed to my fear of death, 
can only extend as far into the future as my mortal eyes can see.  And 
of course I anticipate that when I have lived one more week, one more 
year, one more century, I will find new things to desire, and so desire 
again to live one more week.  In that sense I will want to live forever, 
by induction on the positive integers.  But my desire to be something 
roughly similar to the Eliezer of today is also finite, and can only 
extend as far as today's Eliezer wants to do particular things (in a 
state of mind roughly similar to the Eliezer of today, so that it counts 
as "me" doing them).  When my future self acquires new desires and wants 
to do new things, I will want to do them while being roughly similar to 
the Eliezer of that day, not similar to the Eliezer of 2006.
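
Spelled out, the induction is no fancier than this (a sketch in my own 
notation; W(n) is shorthand for "during week n, I want to live through 
week n+1"):

    \begin{aligned}
    &W(1) &&\text{(today, I want to live one more week)}\\
    &\forall n \ge 1:\ W(n) \rightarrow W(n+1)
        &&\text{(each week lived supplies new desires)}\\
    &\therefore\ \forall n \ge 1:\ W(n)
        &&\text{(yet no single desire takes infinity as its object)}
    \end{aligned}

The conclusion is a desire for each finite n in turn, never one desire 
whose object is the whole infinite future.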

So thee wishes not to be a Great Old One, Russell Wallace?  I should 
like to know what is thy alternative.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


