[ExI] Raymond Tallis: You won't find consciousness in the brain

Emlyn emlynoregan at gmail.com
Wed Jan 13 01:46:47 UTC 2010


I've responded to this below. Summary: I don't buy it. Also, just for
fun, I've put my description of what I think subjective conscious
experience is and does at the bottom of this email, and am hoping for
feedback.

2010/1/11 Damien Broderick <thespike at satx.rr.com>:
> New Scientist: You won't find consciousness in the brain
>
> <http://www.newscientist.com/article/mg20527427.100-you-wont-find-consciousness-in-the-brain.html>
>
> 7 January 2010 by Ray Tallis
>
> [Raymond Tallis wrote a wonderful deconstruction of deconstruction and
> poststructuralism, NOT SAUSSURE]
>
> MOST neuroscientists, philosophers of the mind and science
> journalists feel the time is near when we will be able to explain
> the mystery of human consciousness in terms of the activity of the
> brain. There is, however, a vocal minority of neurosceptics who
> contest this orthodoxy. Among them are those who focus on claims
> neuroscience makes about the preciseness of correlations between
> indirectly observed neural activity and different mental functions,
> states or experiences.
>
> This was well captured in a 2009 article in Perspectives on
> Psychological Science by Harold Pashler from the University of
> California, San Diego, and colleagues, that argued: "...these
> correlations are higher than should be expected given the (evidently
> limited) reliability of both fMRI and personality measures. The high
> correlations are all the more puzzling because method sections
> rarely contain much detail about how the correlations were
> obtained."
>
> Believers will counter that this is irrelevant: as our means of
> capturing and analysing neural activity become more powerful, so we
> will be able to make more precise correlations between the quantity,
> pattern and location of neural activity and aspects of
> consciousness.
>
> This may well happen, but my argument is not about technical,
> probably temporary, limitations. It is about the deep philosophical
> confusion embedded in the assumption that if you can correlate
> neural activity with consciousness, then you have demonstrated they
> are one and the same thing, and that a physical science such as
> neurophysiology is able to show what consciousness truly is.

I don't think there really is such a confusion. I'm pretty sure that
the people studying the structure of the brain, looking for correlates
to consciousness, know about this; we are all subjectively conscious
beings, after all. It's just that you have to start somewhere; the
approach is to keep finding mechanism, keep narrowing things down, and
hope that along the way better information and better understanding
will yield insight on how to find subjective consciousness itself.
Given that currently, regarding subjective first person conscious
experience, we can barely even frame the questions we want to ask,
digging in hard into areas that we can make sense of is a great
approach, particularly given that the one and the other must be
massively interrelated.

> Many neurosceptics have argued that neural activity is nothing like
> experience, and that the least one might expect if A and B are the
> same is that they be indistinguishable from each other. Countering
> that objection by claiming that, say, activity in the occipital
> cortex and the sensation of light are two aspects of the same thing
> does not hold up because the existence of "aspects" depends on the
> prior existence of consciousness and cannot be used to explain the
> relationship between neural activity and consciousness.

Ok, this immediately stops making much sense. Tell me if this is what
he is saying: the sensation of light, and activity in the occipital
cortex are different things, but we might say the activity in the
cortex represents the light. But this representation only makes sense
in the context of something which can understand the representation,
which is consciousness, which puts the cart before the horse?

> This disposes of the famous claim by John Searle, Slusser Professor
> of Philosophy at the University of California, Berkeley: that neural
> activity and conscious experience stand in the same relationship as
> molecules of H[2]O to water, with its properties of wetness,
> coldness, shininess and so on.

Is he talking here about mind being an epiphenomenon of the brain? Or
is it something more mundane: water is made of H2O molecules, which in
aggregate have these middle-world properties as described.

> The analogy fails as the level at
> which water can be seen as molecules, on the one hand, and as wet,
> shiny, cold stuff on the other, are intended to correspond to
> different "levels" at which we are conscious of it.

Wait, does this make sense? Wasn't the preceding sentence using water
as an analogy, not talking about how we were conscious of it?

> But the
> existence of levels of experience or of description presupposes
> consciousness. Water does not intrinsically have these levels.

This is surely playing fast and loose with language. At best I can
understand this as saying that without conscious experience, the world
is just a dance of atoms. There is nothing "wet" because you need a
mind to experience "wet". Yet wetness is also operational; it is a
loose high level description of how water (groups of H2O molecules)
will interact with other substances (it might infuse porous ones, for
instance), and there is no need for the conscious observer to be
present, in theory, for that to still happen.

It's a disingenuous bit of wordplay though, no? At many scales, simple
things can group together and exhibit higher level group behaviours
that aren't necessarily obvious from the basics of the elements, and
aren't like the elements. One H2O molecule really has nothing about it
of wetness or shininess or coldness; no one would describe one
molecule as wet. In aggregate, however, the grouped substance does.

> We cannot therefore conclude that when we see what seem to be neural
> correlates of consciousness that we are seeing consciousness itself.

Sure, that's why they're called correlates.

> While neural activity of a certain kind is a necessary condition for
> every manifestation of consciousness, from the lightest sensation to
> the most exquisitely constructed sense of self, it is neither a
> sufficient condition of it, nor, still less, is it identical with
> it.

For the activity of individual neurons, I'll accept this. But for the
whole system of neurons, it's not at all clear. The wordplay about
water above doesn't in any way tell us about it. Groups of things
really have properties that the individuals do not, in that they have
higher level behaviours which aren't similar to the behaviour of their
elements. An H2O molecule is not wet, but water is. Neurons are very
unlikely to be subjectively conscious (and their molecules and atoms
even less so), but that doesn't tell us whether the system of neurons
is. We *don't know* what subjective consciousness is, so we can't say.
It is probably safe to say that the neural system is necessary for it,
but sufficient and/or equivalent? It could be, it might not be. Occam's
razor says to me that it is more likely that the neural system is
sufficient for consciousness, because otherwise we are looking for
some other mechanism, and there's no evidence of any.

But anyway, his implicit argument that a group of things can't have
properties different from those of its individual elements is just wrong.

> If it were identical, then we would be left with the insuperable
> problem of explaining how intracranial nerve impulses, which are
> material events, could "reach out" to extracranial objects in order
> to be "of" or "about" them.

wtf? Where is there reaching out? There is no necessity for anything
to magically breach the skull. We get input, it goes into the neural
system, it gets processed, it and processed versions of it get stored
as memories and as modifications to our mental processes. There is a
representation inside the brain. The mechanism of the brain can work
(must work!) entirely in terms of these representations. That we then
have subjective experience of some piece of this working, including
feelings about the things represented (what are qualia but feeling
about representations), is mysterious, but we have no reason to
suppose it is anything other than a higher level property of our
neural hardware (otherwise, what is it?). The idea of "reaching out"
is ridiculous. If we were directly experiencing the outside world
somehow, rather than experiencing reconstituted feelings about
representations of things, our mind wouldn't have all the weird
failures it has, and we wouldn't be able to have experiences of the
world different from other people's.

> Straightforward physical causation
> explains how light from an object brings about events in the
> occipital cortex. No such explanation is available as to how those
> neural events are "about" the physical object. Biophysical science
> explains how the light gets in but not how the gaze looks out.

The gaze - is this a reference to Foucault? (reaches for his cudgel)

No gaze looks out. If we ignore first person subjective experience for
a moment, everything else about the brain makes sense in terms of
information processing. A robot can be self-aware, in that it would
have in its memory a collection of representations of things in the
world, one of which is itself. Its processing would include some
places where it was primary, and others where it was just one more
thing in the field of things. A sophisticated enough program should be
able to do all the things that we do, even come up with the same kinds
of thoughts and ideas; if we accept that all the mechanism of the mind
is in the brain, then it must be in principle computable, we just
don't know how to do all that stuff yet.

But it doesn't follow that this robot would be subjectively conscious
like we are. If this subjective consciousness is the "gaze", then
surely it doesn't look out, but merely looks upon the internal
representation of what's out there.
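That "self-awareness as representation" point can be sketched in a few lines of code. This is a toy illustration, not a claim about how real robots or brains work, and all the names (`Robot`, `world_model`, `distance_to`) are my own inventions:

```python
# Toy sketch: self-awareness as representation, not subjective
# experience. The robot's model of itself is just one more entry
# in its model of the world.

class Robot:
    def __init__(self, name):
        self.name = name
        # The world model is a dictionary of represented things;
        # the robot itself is one entry among many.
        self.world_model = {}

    def observe(self, thing, position):
        self.world_model[thing] = position

    def update_self(self, position):
        # The robot represents itself the same way it represents
        # anything else in the field of things.
        self.world_model[self.name] = position

    def distance_to(self, thing):
        # Here the self-representation is "primary": reasoning is
        # done relative to the robot's own represented position.
        sx, sy = self.world_model[self.name]
        tx, ty = self.world_model[thing]
        return abs(sx - tx) + abs(sy - ty)

robot = Robot("r1")
robot.update_self((0, 0))
robot.observe("charger", (3, 4))
print(robot.distance_to("charger"))  # -> 7
```

Nothing here "gazes out"; the robot only ever consults its internal representations, and that's enough for self-referential behaviour.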

> Many features of ordinary consciousness also resist neurological
> explanation. Take the unity of consciousness. I can relate things I
> experience at a given time (the pressure of the seat on my bottom,
> the sound of traffic, my thoughts) to one another as elements of a
> single moment. Researchers have attempted to explain this unity,
> invoking quantum coherence (the cytoskeletal micro-tubules of Stuart
> Hameroff at the University of Arizona, and Roger Penrose at the
> University of Oxford), electromagnetic fields (Johnjoe McFadden,
> University of Surrey), or rhythmic discharges in the brain (the late
> Francis Crick).
>
> These fail because they assume that an objective unity or uniformity
> of nerve impulses would be subjectively available, which, of course,
> it won't be. Even less would this explain the unification of
> entities that are, at the same time, experienced as distinct. My
> sensory field is a many-layered whole that also maintains its
> multiplicity. There is nothing in the convergence or coherence of
> neural pathways that gives us this "merging without mushing", this
> ability to see things as both whole and separate.

Does this make any sense to anyone? If you think of the brain as at
least in part an information processing organ, then it will have
representations of its inputs and itself at many different levels
simultaneously (the colours brown and green and also bark and also the
tree and also the forest), grouped in useful ways, including temporal
grouping. That he can relate the feeling of his arse to his thoughts
is in no doubt, but how does this relate to some "unity of
consciousness"? Why invoke special magic for something so mundane?

> And there is an insuperable problem with a sense of past and future.
> Take memory. It is typically seen as being "stored" as the effects
> of experience which leave enduring changes in, for example, the
> properties of synapses and consequently in circuitry in the nervous
> system.

Absolutely.

> But when I "remember", I explicitly reach out of the present
> to something that is explicitly past.

wtf??? How? With magic powers? Is this guy insisting that we have
direct experience of the physical world, including the physical world
of the past??

All that is required here is that you have an internal representation
of the past, tucked away in your brain somewhere.

> A synapse, being a physical
> structure, does not have anything other than its present state.

Yes, just as computer memory only has its present state, there are no
time machines.

> It does not, as you and I do, reach temporally upstream from the
> effects of experience to the experience that brought about the
> effects.

Fuck a duck! All this requires is representation of the past. If you
accept that we have subjective conscious awareness of some part of the
processing of our minds, and that we can't explain that, there is no
reason to invoke extra unknowns to describe remembering the past.

Clearly, we have a representation of the past encoded in our brains,
which we use to reconstitute the past. We have encodings of what
happened in the past, including representations (not too
sophisticated, one might add) of how we felt. It is clear to me that
as we recall the past in this way, as we imagine it, we then
reconstitute new, current feelings (qualia) relating to it, as if it
were happening now.

The best evidence that this is the case, and that we don't *actually
reach back into the past*, is that we get it wrong a lot, mostly wrong
actually, if you are to believe the science. Our memories (our
representations of the past) are incomplete, and we fill in the blanks
when we load them back up with plausible stuff. Sometimes we fabricate
memory entirely. If you could "explicitly reach out of the present to
something that is explicitly past", into the real past, surely all our
recollections would be perfect and in perfect agreement?

> In other words, the sense of the past cannot exist in a
> physical system.

Information systems do this with boring consistency. They store
records of what happened in the past, who did what, what pieces of
paper were seen by whom, etc. Your email client has a record of its
past. A diary is a record of the past. These are (parts of) a physical
system.
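To make the point concrete, here's a minimal sketch of a physical, present-state-only system that nonetheless "has" a past. The class and method names are illustrative, not drawn from any real system:

```python
# Sketch: a purely present-state system with a "sense of the past".
# It stores records now and reads them now; nothing reaches backward
# in time, just as a synapse only has its present state.
import time

class EventLog:
    def __init__(self):
        self._records = []  # present-state storage, like synapses

    def record(self, event):
        self._records.append((time.time(), event))

    def remember(self):
        # "Remembering" is reading current storage, not touching
        # the past itself.
        return [event for _, event in self._records]

log = EventLog()
log.record("saw a red apple")
log.record("heard traffic")
print(log.remember())  # -> ['saw a red apple', 'heard traffic']
```

The log's "past" exists entirely as its current state, which is exactly the representational account Tallis is arguing can't work.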

> This is consistent with the fact that the physics
> of time does not allow for tenses: Einstein called the distinction
> between past, present and future a "stubbornly persistent illusion".

What? Why is this relevant?

> There are also problems with notions of the self, with the
> initiation of action, and with free will. Some neurophilosophers
> deal with these by denying their existence, but an account of
> consciousness that cannot find a basis for voluntary activity or the
> sense of self should conclude not that these things are unreal but
> that neuroscience provides at the very least an incomplete
> explanation of consciousness.

The basis for voluntary activity is straightforward; a bit of your
brain is responsible for taking in a lot of input, including recent
sensory information, memory, decisions and hints from other bits of
the brain, and deciding on a course of action. In that it decides,
based on whatever algorithms it uses, it is voluntary.

That we have the sense of self, of subjective consciousness, is
undeniably mysterious; no one will dispute that. That we feel like we make decisions
freely, rather than as the result of an algorithm is not at all
mysterious; we feel all kinds of misleading things. Our brains are
weird as hell, and mostly you shouldn't trust your brain too far; I
certainly wouldn't turn my back on mine.

The big mystery to my mind is that we have subjective consciousness at
all. It doesn't seem to do anything useful, that you couldn't do
without it. And yet it certainly has a function, has physical
presence, because we can talk about it, think about it. It can't be
off in some other distinct non-physical realm, because it can affect
our brains. I guess a delusion could also do that, but if it's a
delusion, it's one shared by us all, and hardly counts as such.

> I believe there is a fundamental, but not obvious, reason why that
> explanation will always remain incomplete - or unrealisable. This
> concerns the disjunction between the objects of science and the
> contents of consciousness. Science begins when we escape our
> subjective, first-person experiences into objective measurement, and
> reach towards a vantage point the philosopher Thomas Nagel called
> "the view from nowhere". You think the table over there is large, I
> may think it is small. We measure it and find that it is 0.66 metres
> square. We now characterise the table in a way that is less beholden
> to personal experience.
>
> Thus measurement takes us further from experience and the phenomena
> of subjective consciousness to a realm where things are described in
> abstract but quantitative terms. To do its work, physical science
> has to discard "secondary qualities", such as colour, warmth or
> cold, taste - in short, the basic contents of consciousness. For the
> physicist then, light is not in itself bright or colourful, it is a
> mixture of vibrations in an electromagnetic field of different
> frequencies. The material world, far from being the noisy,
> colourful, smelly place we live in, is colourless, silent, full of
> odourless molecules, atoms, particles, whose nature and behaviour is
> best described mathematically. In short, physical science is about
> the marginalisation, or even the disappearance, of phenomenal
> appearance/qualia, the redness of red wine or the smell of a smelly
> dog.

Yes

> Consciousness, on the other hand, is all about phenomenal
> appearances/qualia. As science moves from appearances/qualia and
> toward quantities that do not themselves have the kinds of
> manifestation that make up our experiences, an account of
> consciousness in terms of nerve impulses must be a contradiction in
> terms. There is nothing in physical science that can explain why a
> physical object such as a brain should ascribe appearances/qualia to
> material objects that do not intrinsically have them.
>
> Material objects require consciousness in order to "appear". Then
> their "appearings" will depend on the viewpoint of the conscious
> observer. This must not be taken to imply that there are no
> constraints on the appearance of objects once they are objects of
> consciousness.
>
> Our failure to explain consciousness in terms of neural activity
> inside the brain inside the skull is not due to technical
> limitations which can be overcome. It is due to the
> self-contradictory nature of the task, of which the failure to
> explain "aboutness", the unity and multiplicity of our awareness,
> the explicit presence of the past, the initiation of actions, the
> construction of self are just symptoms. We cannot explain
> "appearings" using an objective approach that has set aside
> appearings as unreal and which seeks a reality in mass/energy that
> neither appears in itself nor has the means to make other items
> appear. The brain, seen as a physical object, no more has a world of
> things appearing to it than does any other physical object.
>

The brain is an information processing and control system powerhouse.
It also has this associated subjective consciousness, which appears
related to / to have access to only a very small part of the brain,
given how unaware we are of our own internal workings.

The way the author talks about subjective consciousness, he makes it
sound like an indivisible whole, atomic. Yet our brains & minds are
clearly anything but. The very fact that we are so ignorant of how our
mind works shows that the parts which correlate directly with
consciousness have direct access to very little of the rest of the
brain.

I think I've said enough about why I think this guy is wrong.

How about I go out on a limb and say what I think about subjective
consciousness? I can't say how it works, but I have some ideas on why
it exists and what it's for.

It seems to me that subjective consciousness is simply a module of the
mind, which is for something very specific, and that is to feel
things. Qualia like the "redness of red" and emotions like anger share
the property of being felt; they are the same kind of thing. It's
clear to me at least that this is a functional module, in that it
takes information from other parts of the brain as input (for example,
the currently imagined representation of the world, whether that is
current or a reloaded past), produces feelings (how? No idea), then
outputs that back to the other parts of the brain, affecting them in
appropriate ways. The other parts of the brain do everything else;
they create all our "ideas" (and then we get to feel that "aha"
moment), they make all our decisions (to which are added some
feelings of volition), they do all the work. The feelings
produced/existing in the subjective consciousness module are like side
effects of all that, but they go back in a feedback loop to influence
the future operation of the other parts.

Why would you have something like this? What can this do that a
non-subjectively conscious module couldn't? Why not just represent
emotions (with descriptive tags, numerical levels, canned processing
specific to each one), why actually *feel* them? To me that's as big a
question as how. I can't explain that.

What's interesting though is how the purpose of the mechanism of
feeling seems to be to guide all the other areas, to steer them. eg:
some bits of the brain determine that we are in a fight-or-flight
situation. They decide "flight". They inform the feeling module
(subjective consciousness) that we need to feel fear. The feeling
module does that ("Fear!"), and informs appropriate other parts of the
brain to modify their processing in terms appropriate to fear
(affecting decision making, tagging our memories with "I was scared
here", even affecting our raw input processing). So we feel scared and
do scared things. Probably most importantly, we can break "not enough
information" deadlocks in decision making with "well what would the
fearful choice be" - that's motivation right there.
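That deadlock-breaking role can be sketched as code. This is purely speculative, in the spirit of the paragraph above; `decide`, the evidence scores, and the fear preference ordering are all invented for illustration:

```python
# Toy sketch of "feelings as tie-breakers": a decision module scores
# options on evidence, and when the evidence deadlocks, the current
# emotional state steers the choice.

def decide(options, evidence_scores, emotion=None):
    best = max(evidence_scores.values())
    tied = [o for o in options if evidence_scores[o] == best]
    if len(tied) == 1:
        # Evidence alone settles it; no feeling needed.
        return tied[0]
    # "Not enough information" deadlock: let the feeling module steer.
    if emotion == "fear":
        # The fearful choice: prefer fleeing over hiding over fighting.
        for preferred in ["flee", "hide", "fight"]:
            if preferred in tied:
                return preferred
    return tied[0]

# Evidence can't choose between fight and flight, so fear does.
print(decide(["fight", "flee"], {"fight": 1, "flee": 1}, emotion="fear"))
# -> flee
```

A blunt instrument, as said: the emotion doesn't reason, it just biases the whole system in one crude direction, which is motivation right there.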

It's a blunt instrument, which might be useful if you didn't have much
else in terms of executive processes. It is really weird in our brains
though, because we do, we have fantastic higher level processing that
can do all kinds of abstract reasoning and complex planning and
sophisticated decision making. Why do we also need the bludgeons of
emotions like anger, restlessness, boredom, happiness? So we have
roughly two systems doing similar things in very different ways, which
you'd expect to fight. And thus the human condition :-)

But where it would not be weird is in a creature without all this
higher level processing stuff. Never mind how evolution came up with
it in the first place (evolution is amazing that way) but given that
it did, it would be a great platform for steering, motivating, guiding
an unintelligent being. So what I'm getting at is, it's a relic from
our deep evolutionary past. It's not higher cognitive functioning at
all. Probably most creatures are subjectively conscious. They don't
have language, they might not have much concept of the past or future,
but they feel, just as we do (if in a less sophisticated way). They
really have pleasure and pain and the redness of red. And suffering.

We have a conceit that we (our subjectively conscious selves) are
*really* our higher order cognitive processes, but I think that's
wrong.

We take pride in our ideas, especially the ones that come out of
nowhere, but that should be a clue. They come out of "nowhere" and are
simply revealed to the conscious us. "Nowhere" is the modern
information processing bits of the brain, the neocortex, which does
the heavy lifting and informs us of the result without the working.

We claim our own decisions, but neuroscience, as well as simple old
psychology, keeps showing us that decisions are made before we are
aware of them, and that we simply rationalize volition where it
doesn't exist. How do we make decisions? Rarely in a step-by-step
derivational, rational way. More often they are "revealed" to us,
they're "gut instinct". They come from some other part of the brain
which simply informs "us" of the result.

We think of the stream of internal dialogue, the voice in the mind, as
truly "us", but where do all those thoughts come from? You can't
derive them. It's like we are reading a tape with words on it, which
comes from somewhere else; it's being sent in by another part of the
brain that we don't have access to, again. We read them, the
subjective-consciousness module adds feelings of ownership to them,
and decorates them with emotional content, and the result feeds back
out to the inaccessible parts of the brain, to influence the next
round of thoughts on the tape.

In short, I think that the vast majority of the brain is stuff that
our "subjective" self can't access except indirectly through inputs
and outputs. Most of the things that make us smart humans are actually
out in this area, and are plain old information processing stuff, you
could replace them with a chip, and as long as the interfaces were the
same, you'd never know. I think the treasured conscious self is less
like an AGI than like a tiny primitive animal, designed for fighting
and fucking and fleeing and all that good stuff, which evolution has
rudely uplifted by cobbling together a super brain and stapling it to
the poor creature.

I hope I'm right. If this is actually how we work, then the prospect
of seriously hacking our brains is very good. You should be able to
replace existing higher level modules with synthetic equivalents (or
upgrades). You should be able to add new stuff, as long as it obeys
the API (eg: add thoughts to the thought tape? take emotional input
and modify accordingly?)
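Purely for fun, here's what such a "thought tape" API might look like as code. This is entirely speculative, sketching the interface idea above; every name (`ThoughtTape`, `send_thought`, `read_and_react`) is made up:

```python
# Playful sketch of the "thought tape" interface: opaque brain parts
# send thoughts in, the conscious module decorates them with feelings,
# and the feelings feed back out to influence the next round. A
# synthetic module obeying the same API could, in principle, plug in.
from collections import deque

class ThoughtTape:
    def __init__(self):
        self.incoming = deque()   # thoughts sent in by inaccessible brain parts
        self.feedback = deque()   # feelings sent back out

    def send_thought(self, thought):
        self.incoming.append(thought)

    def read_and_react(self):
        # The "conscious" side: read a thought off the tape, decorate
        # it with a feeling, and queue that feeling as feedback.
        thought = self.incoming.popleft()
        feeling = "aha!" if "idea" in thought else "meh"
        self.feedback.append((thought, feeling))
        return feeling

tape = ThoughtTape()
tape.send_thought("a new idea about consciousness")
print(tape.read_and_react())  # -> aha!
```

The design point is that the conscious module never needs access to where thoughts come from, only to the tape, which matches how little of our own workings we can introspect.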

Also, as to correlates of subjectively conscious experience in the
mind, we should be looking for something that exists everywhere, not
just in us. That might narrow it down a bit ;-)

-- 
Emlyn

http://emlyntech.wordpress.com - coding related
http://point7.wordpress.com - ranting
http://emlynoregan.com - main site


