[ExI] The Anticipation Dilemma (Personal Identity Paradox)

Lee Corbin lcorbin at rawbw.com
Thu Jul 19 05:19:50 UTC 2007

Mike wrote

> On 7/16/07, Lee Corbin <lcorbin at rawbw.com> wrote:
> > It might be easier to discard the notion of survivability, or, as you
> > suggest, to eliminate the desire to survive, but I don't want to do
> > that! No, no, no!   :-)     Reasonable or not, "being alive is better 
> > than being dead, all other things equal" is something pretty hard-
> > wired in me, at least for now.
> so hardwired that you're redefining "alive" versus "dead" to
> make that hardwiring easier to manage. 

Yes, I have equated "being alive" with "surviving".  I don't recall
exactly why, but in these discussions *survival* is often the more
useful concept.

I'm afraid I don't see your point here, so far.

> I think much of the difficulty of drawing people into these discussions
> is convincing them to release their own ancient hardwiring about identity.

Hmm, "drawing them in" doesn't seem to be the difficulty!  :-)   But yes,
it's like you say with regard to each of us having his or her own
preconceived and rather fixed intuitions regarding whether we'd
survive some given scenario or not.  Or, to take another example,
whether to teleport, given that teleportation would disassemble one,
transmit the information to another point in space, and then reconstitute
one from different atoms.

> If you were to talk about the issues facing the first AI that
> wakes up and convincingly explains itself to be conscious
> of its own identity - I don't think you'd have the human
> identity bias to relinquish.

That sounds right.  In fact, the general trend, it seems to me,
has been too much anthropomorphising:  I don't think that we
can convincingly say what attitude such an awakening AI
would have.  Clearly it would depend a great deal on whatever
goal structure it had (as has been explored on SL4 and here at
great length).

> Once you head down that road, it's much easier to introduce
> mind uploading and arrive at the same conclusions with
> (possibly) fewer hangups. 

You mean to say that upon hearing an AI's account, we'd
be in a better position to have our own opinions as to what
constitutes surviving?
