[extropy-chat] Fragmentation of computations

Lee Corbin lcorbin at rawbw.com
Wed Mar 28 15:25:03 UTC 2007

Stathis writes

> On 3/27/07, Lee Corbin <lcorbin at rawbw.com> wrote:

> > If taken to the limit, then this particular 50/50 example would mean to me
> > that one state would be computed, then the next looked up, then the next
> > computed and so on. Just as "being a zombie for 1 hour" and then "being 
> > completely conscious for 1 hour" can alternate meaningfully, then so can
> > each, what?, billionth of a second.
> By 50/50 I don't mean that half the frames of the simulation are computed
> and half looked up, but that half the *board* (or half your brain) is
> computed (or biological) and half looked up (or electronic). 

Oh, that's right.  I forgot.  Your case is the more challenging and interesting.
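The scenario can be made concrete with a small sketch (my own hypothetical illustration, not anything from Stathis's message): a one-dimensional cellular automaton in which, at every step, the left half of the "board" is computed from the update rule while the right half is merely copied from a pre-recorded trace of the same run. Since the recording was itself produced by the rule, the hybrid run's sequence of global states is identical to the fully computed run's.

```python
def step(cells):
    """Compute one rule-110 step of a 1-D cellular automaton
    (cells outside the board are treated as 0)."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right
        out[i] = (110 >> pattern) & 1
    return out

def run(initial, steps):
    """Fully computed trace: the list of successive board states."""
    trace = [initial]
    for _ in range(steps):
        trace.append(step(trace[-1]))
    return trace

def run_half_lookup(initial, steps, recorded):
    """Stathis's 50/50 case: each step, the left half of the board
    is computed from the rule; the right half is looked up from a
    pre-recorded trace of the same run."""
    half = len(initial) // 2
    trace = [initial]
    for t in range(steps):
        computed = step(trace[-1])
        hybrid = computed[:half] + recorded[t + 1][half:]
        trace.append(hybrid)
    return trace

initial = [0] * 16
initial[8] = 1
recorded = run(initial, 20)                 # the "lookup table"
hybrid = run_half_lookup(initial, 20, recorded)
assert hybrid == recorded                   # externally indistinguishable
```

The point of the sketch is only that the two runs are indistinguishable from the outside, frame after frame, which is what makes the question of how much consciousness (if any) is "going on" in the hybrid case so awkward.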

> This 50/50 situation could continue frame after frame for hours. I suppose
> it isn't impossible that the subject's consciousness is rapidly flickering
> during this interval, but it seems a very ad hoc theory to me. Could you
> calculate or measure the frequency of the flickering?

When you write "I suppose that it isn't impossible that the subject's
consciousness is rapidly flickering during this interval", you are 
perhaps referring to the subjective quality of the experience. To the
degree that you are so referring, I don't look at it quite in the same
way. There would, to me, be absolutely no perception of any
flickering, or of anything unusual at all.  It's just that the objective
*amount* of consciousness going on there inside the system must
(if all my hypotheses are right) be diminished by some fraction.

> And what about the fact that, however short the conscious phase
> is, it is still occurring in the setting of half the board being looked up?

Yes.  Recalling that our subjective impression of a "unified consciousness"
is an illusion, a myth that our brains generate because the resulting
organic system integrity has been important for survival (recall the way
that split-brain patients will do and say almost anything to preserve the
appearance of total integrity), pain, pleasure, and consciousness---again
seen from the outside---are occurring in only some places on the board,
as you say.

I do admit to this being somewhat ad hoc.  But as I mentioned before,
I have felt forced to this position by a lack of alternatives. On the one
hand, I think it's too unsatisfactory to think that sets of frozen frames,
or rocks, or frames (states) not causally connected, can be conscious.
(I should also hasten to point out that however unclear we may be
about what we mean by that, i.e., by "conscious", it is *perfectly*
clear what choices lie before us in the real world: for example, we
sacrifice trees and mountains readily on moral grounds rather than
harming or killing "sentient" entities.)

And on the other hand, it seems quite inescapable that conscious
robots could, and shortly will, exist, and that it will be possible to
take such a program and single-step through its deterministic
execution.  Such a program---perhaps suffering horribly, or perhaps
gaining a great deal of satisfaction---compels us to make a moral
choice again:  do we sacrifice a mountain (composed of innumerable
rocks) by, say, converting it to photons radiating in all directions,
or do we sacrifice entities like ourselves that we so strongly believe
have feelings?

So that's why I adopt this apparently "ad hoc" position.

