[extropy-chat] Fragmentation of computations

Stathis Papaioannou stathisp at gmail.com
Sun Mar 25 04:44:36 UTC 2007


On 3/25/07, Lee Corbin <lcorbin at rawbw.com> wrote:

> Let me consider a concrete case, which could be implemented on a Life
> board. Let's say that 100 trillion generations follow one another
> according to the rules of Life, and are implemented in a real, causally
> deterministic machine of some kind (thereby satisfying my causality
> criterion). Let's further suppose that this emulates the conscious
> experience of someone or something. Then your experiment says that if
> we were to checkpoint generation number 1, and checkpoint generation
> number 50 trillion, then we might re-run the computation, except doing
> the second half first.
>
> That's perfectly sensible, and would deliver, in my opinion, almost all
> the experience that the individual in question obtained during the
> first run. Of all 100 trillion states, all but the initial state is/was
> computed during the first run. But in the scenario where the second
> half is re-run first, then state number 50 trillion is not *caused*, is
> not *computed* by any previous state. It is pulled off the shelf, so to
> speak. It is merely "looked up".
>
> So what? What is one or two states out of 100 trillion? That's why, to
> me, this Greg Egan type thought experiment makes perfect sense.
>
> But what if only 9 out of 10 generations are computed, and the other 1
> out of 10 are looked up? Then I must suppose that the extent of the
> conscious experience is diminished by one-tenth! I am forced to take
> this stand, because if we take an ultimate limit, and merely have
> static, frozen states scattered across space, then there occurs no
> activity, no computation, no causality, and no experience whatever.


If that were so, then the Life inhabitant would be a zombie during the
looked-up frames. An external observer would note the patterns on the board,
would see light entering the subject's eyes, and would see the subject
pressing the button to register that he has perceived the light, but in fact
the subject would not perceive anything. Moreover, at the next frame, which
is computed, the subject would suddenly remember perceiving the light and
have no sense that anything unusual had happened.
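
To make this concrete, here is a minimal sketch in Python of the 1-in-10
case, assuming a toy stand-in for the scenario (a glider on a small toroidal
board rather than a conscious being, and 100 generations rather than 100
trillion). The first run computes and records every generation; the re-run
looks up every 10th generation instead of computing it, and the two runs are
externally indistinguishable, frame for frame:

def step(board, size):
    """Compute one generation of Conway's Life on a size x size torus.

    board is the set of (x, y) coordinates of live cells."""
    nxt = set()
    for x in range(size):
        for y in range(size):
            n = sum(((x + dx) % size, (y + dy) % size) in board
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            # A cell is live next generation with exactly 3 live
            # neighbours, or with 2 if it is already live.
            if n == 3 or (n == 2 and (x, y) in board):
                nxt.add((x, y))
    return nxt

SIZE = 16
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# First run: every state is computed (caused) by its predecessor.
recording = [glider]
for _ in range(100):
    recording.append(step(recording[-1], SIZE))

# Re-run: 9 out of 10 generations are computed, 1 out of 10 looked up.
board = glider
for t in range(1, 101):
    if t % 10 == 0:
        board = recording[t]        # pulled off the shelf
    else:
        board = step(board, SIZE)   # computed from the previous state
    assert board == recording[t]    # no external difference whatsoever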

We could make the example more complex. Suppose that, frame by frame, a
gradually increasing number of squares on the Life board are looked up,
while the remainder are computed according to the usual rules. What would
happen when the squares representing half the subject's visual field are
looked up? He would notice that he couldn't see anything on the right and
might exclaim, "Hey, I think I'm having a stroke!" But since the looked-up
squares are taken from a recording of the same deterministic run, the
computation proceeds exactly as if all the squares were computed; there is
no way it could run off in a different direction so that the subject
notices his perception changing and changes his behaviour accordingly. This
is analogous to David Chalmers's "fading qualia" argument against the idea
that the replacement of neurons by silicon chips will result in
zombification:

http://consc.net/papers/qualia.html
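
Continuing the toy sketch above, the half-visual-field case is just a
per-square version of the same lookup (the left/right split of the board
standing in for the visual field). Because the looked-up squares come from
a recording of the same deterministic run, the mixed frames never diverge
from the fully computed ones:

def mixed_step(board, recorded_next, size):
    """Compute the left half of the next frame by the usual rules;
    look up the right half from the recording."""
    computed = step(board, size)
    left = {(x, y) for (x, y) in computed if x < size // 2}
    right = {(x, y) for (x, y) in recorded_next if x >= size // 2}
    return left | right

board = glider
for t in range(1, 101):
    board = mixed_step(board, recording[t], SIZE)
    assert board == recording[t]  # it cannot run off in a different direction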

Stathis Papaioannou

