[extropy-chat] Fragmentation of computations

Stathis Papaioannou stathisp at gmail.com
Thu Mar 29 12:01:26 UTC 2007


On 3/29/07, Lee Corbin <lcorbin at rawbw.com> wrote:

> > By 50/50 I don't mean that half the frames of the simulation are
> > computed and half looked up, but that half the *board* (or half your
> > brain) is computed (or biological) and half looked up (or electronic).
>
> Oh, that's right.  I forgot.  Your case is the more challenging and
> interesting.
>
> > This 50/50 situation could continue frame after frame for hours. I
> > suppose it isn't impossible that the subject's consciousness is
> > rapidly flickering during this interval, but it seems a very ad hoc
> > theory to me. Could you calculate or measure the frequency of the
> > flickering?
>
> When you write "I suppose that it isn't impossible that the subject's
> consciousness is rapidly flickering during this interval", you are
> perhaps referring to the subjective quality of the experience. To the
> degree that you are so referring, I don't look at it quite in the same
> way. There would, to me, be absolutely no perception of any
> flickering, or of anything unusual at all.  It's just that the objective
> *amount* of consciousness going on there inside the system must
> (if all my hypotheses are right) be diminished by some fraction.


Yes, it would be rather as if I were dead every other second: I wouldn't
notice anything, but my consciousness per unit time would be halved.
Actually, I don't know if this would be such a bad thing, if zombie me
didn't do anything I wouldn't do during the dead periods. Suppose I were
offered a 10% salary increase if I volunteered to be dead half the time. If
I took it up, I would only be able to really enjoy 55% of my current salary;
on the other hand, I would *think* I was enjoying 110% of my current salary,
at the cost of only a twinge of discomfort from knowing what was really
going on. Should I take up the offer?
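
The arithmetic, as a minimal sketch (the 10% raise and the 50% duty cycle
are just the figures from the thought experiment above; the assumption
that enjoyment scales linearly with both salary and waking time is mine):

    # What is actually experienced vs. what seems to be experienced,
    # if I am conscious for only half of each pay period.
    base = 1.00          # current salary, normalised to 100%
    raise_factor = 1.10  # the offered 10% increase
    duty_cycle = 0.5     # fraction of the time I am actually conscious

    perceived = base * raise_factor          # what I *think* I enjoy: 110%
    really_enjoyed = perceived * duty_cycle  # what I really enjoy: 55%
    print(f"perceived {perceived:.0%}, really enjoyed {really_enjoyed:.0%}")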

I can think of a response to the "fading qualia" argument I have cited which
does not require flickering. In some cases of cortical blindness in which
the visual cortex is damaged but the rest of the visual pathways are intact,
patients insist that they are not blind and come up with explanations as to
why they fall over and walk into things, e.g. they accuse people of putting
obstacles in their way while their back is turned. This isn't just denial,
since it is specific to cortical lesions rather than to blindness from
other causes. If these patients had advanced cyborg implants they could
presumably convince the world, and be convinced themselves, that their
visual perception had not suffered when in fact they can't see a thing.
Perhaps gradual cyborgisation of the brain would lead to a similar, gradual
fading of thoughts and perceptions; the external observer would not notice
any change and the subject would not notice any change either, until he was
dead, replaced by a zombie. The analogy could also be applied to gradual
replacement with looked up components.

Having said this, I still think the simplest explanation consistent with all
the facts is that what you call non-causally connected states are as
conscious as the causally connected ones.

> > And what about the fact that, however short the conscious phase is,
> > it is still occurring in the setting of half the board being looked
> > up?
>
> Yes.  Recalling that our subjective impression of a "unified
> consciousness" is an illusion, a myth that our brains generate because
> the resulting organic system integrity has been important for
> evolutionary survival (recall the way that split-brain patients do and
> say almost anything to preserve the total integrity), then either pain
> or pleasure, or consciousness---again seen from the outside---is
> occurring in only some places on the board, as you say.
>
> I do admit to this being somewhat ad hoc.  But as I mentioned before,
> I have felt forced to this position by a lack of alternatives. On the one
> hand, I think it's too unsatisfactory to think that sets of frozen frames,
> or rocks, or frames (states) not causally connected, can be conscious.
> (I should also hasten to point out that however unclear we may be
> about what we mean by that, i.e., by "conscious", it is *perfectly*
> clear what choices lie before us in the real world: for example, we
> sacrifice trees and mountains readily on moral grounds rather than
> harming or killing "sentient" entities.)


You can't harm the consciousness in a rock by crushing the rock, because the
whole idea is that it does not depend on any particular configuration of
matter. In the final analysis, the consciousness is entirely contained in
the not-actually-realized table mapping arbitrary physical states to
computational states (in order to realize it, we would have to build and
program a computer with the consciousness: the rock is superfluous to this
process). Thus, this rock-is-conscious idea is another way of saying that
all conscious computations are realized merely by virtue of their status as
platonic objects. Strange though it may seem, it is consistent with the
basic idea of functionalism, which is that consciousness resides in the
form, not the substance.
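
To make that table idea concrete, here is a minimal sketch (the state
labels and the computation are invented purely for illustration; nothing
here comes from the discussion beyond the idea of an arbitrary lookup
table):

    # A toy version of the mapping argument: take an arbitrary sequence
    # of rock microstates and the state trace of some computation, and
    # build the table that "interprets" the rock as running it.
    rock_states = ["r0", "r1", "r2", "r3"]            # successive physical states
    computation = ["init", "add", "compare", "halt"]  # successive computational states

    # The entire content of the "computation in the rock" lives here:
    mapping = dict(zip(rock_states, computation))

    # To use the table we must already know the computation we want to
    # find in the rock, which is why the rock itself is superfluous.
    print([mapping[r] for r in rock_states])  # ['init', 'add', 'compare', 'halt']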

> And on the other hand, it seems quite inescapable that conscious
> robots could, and shortly will exist, and that it will be possible to
> take such a program and single-step through its deterministic
> execution.  And that such a program---either perhaps suffering
> horribly or gaining a great deal of satisfaction---compels us to make
> a moral choice again:  do we sacrifice a mountain (composed of
> innumerable rocks) by, say, converting it to photons radiating
> in all directions, or do we sacrifice entities like ourselves that we
> so strongly believe have feelings?
>
> So that's why I adopt this apparently "ad hoc" position.
>
> Lee
>

Stathis Papaioannou