<br><br><div><span class="gmail_quote">On 16/06/07, <b class="gmail_sendername">Jef Allbright</b> <<a href="mailto:jef@jefallbright.net">jef@jefallbright.net</a>> wrote:<br><br></span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
> In a duplication experiment, one copy of you is created intact, while the<br>> other copy of you is brain damaged and has only 1% of your memories. Is the<br>> probability that you will find yourself the brain-damaged copy closer to 1/2
<br>> or 1/100?<br><br>Doesn't this thought-experiment and similar "paradoxes" make it<br>blindingly obvious that it's silly to think that "you" exist as an<br>independent ontological entity?
<br><br>Prior to duplication, there was a single biological agent recognized<br>as Stathis. Post-duplication, there are two very dissimilar<br>biological agents with recognizably common ancestry. One of these<br>would be recognized by anyone (including itself) as being Stathis.
<br>The other would be recognized by anyone (including itself) as being<br>Stathis diminished.<br><br>Where's the paradox? There is none, unless one holds to a belief in<br>an essential self.</blockquote><div><br>You are of course completely right, in an objective sense. However, I am burdened with a human craziness which makes me think that I am going to be one, and only one, person post-duplication. This idea is at least as firmly fixed in my mind as the desire not to die (another crazy idea: how can I die when there is no absolute "me" persisting from moment to moment, and even if there were, why should I be a slave to my evolutionary programming when I am insightful enough to see how I am being manipulated?). My question is about how wild-type human psychology leads one to view subjective probabilities in these experiments, not about the uncontested material facts.
<br></div><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">> In the first stage of an experiment a million copies of you are created. In
<br>> the second stage, after being given an hour to contemplate their situation,<br>> one randomly chosen copy out of the million is copied a trillion times, and<br>> all of these trillion copies are tortured. At the start of the experiment
<br>> can you expect that in an hour and a bit you will almost certainly find<br>> yourself being tortured or that you will almost certainly find yourself not<br>> being tortured? Does it make any difference if instead of an hour the
<br>> interval between the two stages is a nanosecond?<br><br>I see no essential difference between this scenario and the previous<br>one above. How can you possibly imagine that big numbers or small<br>durations could make a difference in principle?
<br><br>While this topic is about as stale as one can be, I am curious about<br>how it can continue to fascinate certain individuals.<br></blockquote></div><br>It has fascinated me for many years, in part because different parties each see an "obvious" answer, and these answers are completely at odds with each other. My "obvious" answer is that we could already be living in a world where multiple copies of us are being made all the time, and we would still have developed exactly the same theory of, and attitude towards, probability as if there were only a single world.
<br><br clear="all"><br><br>-- <br>Stathis Papaioannou