[ExI] Next moment, everything around you will probably change

Jef Allbright jef at jefallbright.net
Fri Jun 22 21:20:53 UTC 2007


On 6/22/07, gts <gts_2000 at yahoo.com> wrote:
> On Fri, 22 Jun 2007 03:08:21 -0400, Lee Corbin <lcorbin at rawbw.com> wrote:
>
> > .... recall that according to my
> > beliefs it is inherently selfish for an instance of me to kill itself
> > immediately so that its duplicate gets $10M, given that one of
> > them has to die and if the instance protects itself, then it does
> > not get the $10M.  That may seem an unusual way to use the
> > word "selfish", but I mean it most literally.  Were "I" and my
> > duplicate in such a situation, "this instance" would gladly kill
> > itself because it would (rightly, I claim) anticipate awakening
> > the next morning $10M richer.
>
> I'm pretty sure that if you kill "this instance" of yourself today, you
> will never again have to worry about money, because you will never wake up.

Damn.  Now I feel compelled to rebut this claim, lest there be more
confusion as people assume that I support the competitor of my
competitor.

Lee is absolutely correct in his assertion above, but only in the very
narrow case that both of these agents fully represent (act on behalf
of) a single abstract entity known as Lee.  At post-duplication t = 0
(plus some uncertain duration), this is necessarily the case.

Lee's view becomes progressively less coherent as the agents' contexts
diverge with time and independent circumstance.

Gordon is absolutely correct, but only to the extent that the two
agents represent separate entities.  Some partial congruence of agency
may be maintained, for example, if circumstances were to include a
perceived threat to an entity valued in common, such as the abstract
Lee entity, or, preferably, by some sort of cooperative agreement or
contract.

My point is that while Lee and Gordon are both correct in their narrow
opposing all-or-nothing interpretations, they are both wrong in
missing the more encompassing definition of personal identity as *the
extent* to which an agent represents a particular abstract entity, a
construct in the mind of any observer(s), including the mind of the
agent itself.

Agency-based personal identity naturally accommodates our present
everyday view of a constant relationship between the agent and the
abstract entity which it represents. Even though both of these change
with time, the relationship -- the personal identity -- is constant.
Agency-based personal identity is more extensible because it
accommodates the idea of duplicate persons with no paradox and no
hanging question of what constitutes sufficient physical/functional
similarity.  And agency-based personal identity accommodates future
scenarios of variously enabled/limited variants of oneself performing
tasks on behalf of a common entity and viewed as precisely *that*
entity for social/moral/judicial purposes by itself (itselves) and
others.  Perhaps counter-intuitively, agency-based personal identity
shows us that agents more specifically differentiated in their
function will maintain the entity-agent relationship more reliably due
to less potential for conflict based on similar values expressed
within disparate contexts.

- Jef
