[extropy-chat] Fragmentation of computations

Lee Corbin lcorbin at rawbw.com
Sun Mar 25 21:17:22 UTC 2007


Stathis and I appear to be seeing eye to eye on this topic, or
at least you understand my argument, even if you are not in
entire agreement with it. As far as I can tell, I'm agreeing
with you.  Check this out:

> >  I am forced to take this stand [that looked-up states don't
> > deliver experience], because if we take an ultimate limit, and
> > merely have static, frozen states scattered across space, then
> > there occurs no activity, no computation, no causality, and no
> > experience whatever.
> 
> If that were so, then the Life inhabitant would be a zombie during
> the looked up frames. An external observer would note the
> patterns on the board, would see light entering the subject's eyes,
> would see the subject pressing the button to register that he has
> perceived the light, but in fact the subject would not perceive
> anything.

Yes---there would *be* no subject.

> Moreover, at the next frame, which is computed, the subject would
> suddenly remember perceiving the light and have no recollection that
> anything unusual had happened. 

That's right.  It takes us back to the by-now old observation that God
could have created the universe 1 second ago, and we'd be none the
wiser.

> We could make the example more complex. Suppose that frame by
> frame, a gradually increasing number of squares on the Life board
> are looked up, while the remainder are computed according to the
> usual rules...

Yes, over an increasing area, values are not computed but pulled
in, in an arbitrary-seeming fashion, from somewhere else.

> ...What would happen when the squares representing half the
> subject's visual field are looked up? He would notice that he
> couldn't see anything on the right and might exclaim, "Hey...

As you know, he cannot have any such reaction...

> But the computation is proceeding deterministically just the
> same as if all the squares were computed; there is no way it
> could run off in a different direction so that the subject notices
> his perception changing and changes his behaviour accordingly.

Right.
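
To make the point concrete, here is a minimal sketch in Python
(the toroidal grid, the function names, and the column-strip
lookup region are all just my illustrative assumptions): a run
of Life is first recorded honestly, then replayed with an
expanding region of cells looked up from the recording instead
of computed.

    import random

    def life_step(board):
        """One step of Conway's Life on a toroidal 0/1 grid."""
        n = len(board)
        new = [[0] * n for _ in range(n)]
        for r in range(n):
            for c in range(n):
                live = sum(board[(r + dr) % n][(c + dc) % n]
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                           if (dr, dc) != (0, 0))
                new[r][c] = 1 if live == 3 or (live == 2 and board[r][c]) else 0
        return new

    def hybrid_step(board, recorded_next, lookup_region):
        """Compute cells normally, except copy those in lookup_region
        from a recording of a previous, fully computed run."""
        n = len(board)
        computed = life_step(board)
        return [[recorded_next[r][c] if (r, c) in lookup_region else computed[r][c]
                 for c in range(n)] for r in range(n)]

    random.seed(0)
    N, STEPS = 16, 12
    start = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

    # First pass: run the rules honestly and record every frame.
    history = [start]
    for _ in range(STEPS):
        history.append(life_step(history[-1]))

    # Second pass: replay, looking up an ever-larger strip of columns
    # from the recording instead of computing them.
    board = start
    for t in range(STEPS):
        region = {(r, c) for r in range(N) for c in range(t)}
        board = hybrid_step(board, history[t + 1], region)
        assert board == history[t + 1]  # identical, step for step

The assert can never fail, because the looked-up values are by
construction exactly the values the rules would have produced;
there is no step at which the substitution leaves any trace in
the state from which behaviour could diverge.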

> This is analogous to David Chalmers's "fading qualia" argument
> against the idea that replacement of neurons by silicon chips
> will result in zombification: 

Oh, I had never heard of that. Hmm, I suppose that what you and
Chalmers are really asserting is that the subject has fewer
"qualia" over time, but only insofar (to rephrase it) as he is
becoming less an actual subject moment by moment.

I confess to extreme pain in composing any sentence containing the
term "qualia" without taking a derogatory swipe at it. It's the worst
concept in all of philosophy.

But what causes me other pain, not so intense though, is having to
take sides against functionalism, or at least my understanding of
it. Yes, sigh, I have to agree that "zombification" and "zombies"
as a concept do make sense.  But!  I buy the anti-zombie arguments
TOTALLY when they apply to ordinary causal processes.  That is,
I sneer at the idea that robots or AI could function the way we do
without conscious awareness.

So, sadly, I'm reduced to a 99.9% functionalist.  Show me a 
working device that looks like a duck, sounds like a duck and
acts like a duck, and I'll agree that it's a duck.  But then I will
back off if you are somehow able to make the case that in
this particular instance it is not a causal process.

Lee



