[extropy-chat] The Anticipation Dilemma (Personal Identity Paradox)

Lee Corbin lcorbin at rawbw.com
Sun Apr 15 15:43:12 UTC 2007


Stathis writes

> > Moreover, so far as *choices* are concerned, I can
> > very, very seldom do anything about the past.  But
> > determining whether my duplicate will get $10M and
> > deposit it in our account is important.
> 
> But my anticipation module makes me worry more about
> what happens to me than what happens to my copy in the
> next room,

Which could lead you into error (if you are trying to maximize
your well-being). Insofar as choices go, under certain conditions,
such as the one above, the survival of your duplicate is what is
optimal for you (rather than the survival of the instance that has
to make the choice).

> in the same way as I worry more about the future than the past.
> In fact, there is a sense in which my relationship to copies of me
> in the past is the same as my relationship to copies of me in the
> next room or in parallel universes that I can't access. Even if I
> could change things so that in some alternate history things
> worked out better for me, I wouldn't thereby anticipate the past more. 

Yes, that's right.

> I have agreed with you all along that this sort of thinking is not
> always rational and consistent, but there is no universal law
> saying that our feelings have to be rational and consistent.

Alas, right too. But we must be rational about our choices,
and so, just as in other areas of life, the urging of our
feelings must sometimes be overridden.

> There is no rational reason why I should wish to survive in
> any capacity at all; it's just that humans have evolved with
> the strong desire to survive. We could imagine an AI that
> was perfectly rational and yet had no qualms about terminating
> its existence

Such a being could indeed exist.  If only we could rid the literature
(and some of the views expressed on mailing lists) of the notion that
every entity must be motivated to survive!

Lee



