[ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality
Eliezer S. Yudkowsky
sentience at pobox.com
Fri Jun 15 01:58:07 UTC 2007
Suppose I want to win the lottery. I write a small Python program,
buy a ticket, and then suspend myself to disk. After the lottery
drawing, the Python program checks whether the ticket won. If not,
I'm woken up. If the ticket did win, the Python program creates one
trillion copies of me with minor perturbations (this requires only 40
binary variables, since 2^40 is about 1.1 trillion). These trillion
copies are all woken up and
informed, in exactly the same voice, that they have won the lottery.
Then - this requires a few more lines of Python - the trillion copies
are subtly merged, so that those binary variables and their
consequences converge, on each clock tick, toward their
statistical averages. At the end of, say, ten seconds, there's only
one copy of me again. This prevents any permanent expenditure of
computing power or division of resources - we only have one bank
account, after all; but a trillion momentary copies aren't a lot of
computing power if they only have to last for ten seconds. At least,
it's not a lot of computing power relative to winning the lottery, and
I only have to pay for the extra crunch if I win.
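For concreteness, here is a minimal Python sketch of the bookkeeping
described above. The suspend, copy, and merge steps are of course
hypothetical - no such primitives exist - so the sketch just
represents each copy by its perturbation bits, shrunk from 40 bits
and real lottery odds down to toy sizes so that it actually runs:

    import random

    # Toy bookkeeping for the thought experiment.  Everything here is
    # hypothetical - there is no real suspend/copy/merge API for minds.
    # The post's version uses 40 perturbation bits (2**40 ~ 1.1 trillion
    # copies) and real lottery odds; both are shrunk so the toy runs.

    N_BITS = 10              # the post's version: 40
    LOSING_OUTCOMES = 1000   # the post's version: ~a hundred million

    def lottery_won():
        # One winning outcome per LOSING_OUTCOMES losing outcomes.
        return random.randrange(LOSING_OUTCOMES + 1) == 0

    def spawn_copies(n_bits):
        # One copy for every setting of the perturbation bits.
        return [[(i >> b) & 1 for b in range(n_bits)]
                for i in range(2 ** n_bits)]

    def merge(copies, ticks=10):
        # Each tick, nudge every copy's perturbed variables toward the
        # population average; after the last tick all copies coincide.
        target = [sum(col) / len(copies) for col in zip(*copies)]
        for t in range(1, ticks + 1):
            frac = t / ticks
            copies = [[(1 - frac) * bit + frac * avg
                       for bit, avg in zip(copy, target)]
                      for copy in copies]
        return copies[0]     # the single remaining copy

    if lottery_won():
        copies = spawn_copies(N_BITS)
        print(len(copies), "copies are told they won the lottery")
        me = merge(copies)
    else:
        print("one copy is told it lost the lottery")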
What's the point of all this? Well, after I suspend myself to disk, I
expect that a trillion copies of me will be informed that they won the
lottery, whereas only a hundred million copies will be informed that
they lost the lottery. Thus I should expect overwhelmingly to win the
lottery. None of the extra created selves die - they're just
gradually merged together, which shouldn't be too much trouble - and
afterward, I walk away with the lottery winnings, at over 99%
subjective probability.
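Spelled out, taking the post's copy counts at face value, the
arithmetic behind that figure is just:

    winners, losers = 10 ** 12, 10 ** 8   # copies told "won" vs. "lost"
    print(winners / (winners + losers))   # ~0.9999, i.e. well over 99%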
Of course, using this trick, *everyone* could expect to almost
certainly win the lottery.
I mention this to show that the question of what it feels like to have
a lot of copies of yourself - what kind of subjective outcome to
predict when you, yourself, run the experiment - is not at all
obvious. And the difficulty of imagining an experiment that would
definitively settle the issue - especially one observed from the
outside - or of saying what state of reality would correspond to the
different possible subjective results, is enough to suggest that I am
just deeply confused about the whole issue.
It is a very important lesson in life to never stake your existence,
let alone anyone else's, on any issue which deeply confuses you - *no
matter how logical* your arguments seem. This has tripped me up in
the past, and I sometimes wonder whether nothing short of dreadful
personal experience is capable of conveying this lesson. That which
confuses you is a null area; you can't do anything with it by
philosophical arguments until you stop being confused. Period.
Confusion yields only confusion. It may be important to argue
philosophically in order to progress toward resolving the confusion,
but until everything clicks into place, in real life you're just screwed.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence