[ExI] The relevance of glutamate in color experience

Jason Resch jasonresch at gmail.com
Mon May 2 13:25:57 UTC 2022


On Sun, May 1, 2022, 10:04 PM Brent Allsop <brent.allsop at gmail.com> wrote:

>
> Hi Jason,
>
> Yes, this is the Neuro Substitution Argument for functionalism
> <https://canonizer.com/topic/79-Neural-Substitn-Argument/1-Agreement> Stathis,
> I and others have been rehashing, forever, trying to convince the other
> side..  Stathis, Chalmers, and other functionalists
> <https://canonizer.com/topic/88-Theories-of-Consciousness/18-Qualia-Emerge-from-Function>
> believe they must accept functionalism because of this argument.  This is a
> specific example of the 'dancing qualia' contradiction (one of many) which
> results if you accept this argument.
>
> I like to point out that this argument is dependent on two assumptions.
> 1., that all the neurons do is the same thing discrete logic gates do in
> abstract computers.  2. That the neuro substitution will succeed.  If
> either of these two fail, the argument doesn't work.
>

I think you may be misapplying the computational universality argument as
it pertains to machines and minds.

What I, and other functionalists, claim is not that neurons are like logic
gates or that the brain is like a computer, but quite the opposite:

It's not that the brain is like a computer but that a computer (with the
right program) can be like a brain.

The universality of computers means they are sufficiently versatile and
flexible that they can predict the behavior of neurons (or molecules, or
atoms, or quarks, or chemical interactions, or physical field
interactions, or anything else that is computable).

Your assumption that functional equivalence is impossible to achieve
between a computer and a neuron implies that neurons must do something
uncomputable. They must do something that would take a Turing machine an
infinite number of steps, or that requires processing an infinite quantity
of information in a single step. But what function could this be in a
neuron, and how could it be relevant to neuronal behavior?

All known physical laws are computable, so if everything neurons do is in
accordance with known physical laws, then neurons can in principle be
perfectly simulated.
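
To illustrate what "computable" means here, below is a minimal sketch of a
standard textbook neuron model, the leaky integrate-and-fire neuron, in
Python. The model and its parameter values are only illustrative (this is
not a claim about the detailed biophysics of real neurons); the point is
simply that nothing in such a model needs an infinite number of steps or
an infinite amount of information.

# Minimal leaky integrate-and-fire neuron: a purely computable model.
# Parameter values are illustrative, not physiologically calibrated.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=1.0):
    """Return spike times (ms) for input currents sampled every dt ms."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and is driven by input.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_thresh:          # threshold crossed: the neuron "fires"
            spikes.append(step * dt)
            v = v_reset            # reset after the spike
    return spikes

# Example: a constant 20 units of input current for 100 ms.
print(simulate_lif([20.0] * 1000)[:5])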

For your argument to work, one must suppose there are undiscovered,
uncomputable physical laws which neurons have learned to tap into, and
that these laws are important to their function and behavior.

But what is the motivation for this supposition?

For Penrose, it was the idea that human mathematicians can know something
is true that a consistent machine following fixed rules could not prove.
But this is flawed in many ways. Truth is different from proof; human
mathematicians are not consistent; they don't necessarily stay within one
system when reasoning; and further, they are subject to the same Gödelian
constraints.

For example: "Roger Penrose cannot consistently believe this sentence is
true." You and I can see it as true and believe it, but Penrose cannot. He
is stuck the same way any consistent proving machine can be stuck. He can
only see the sentence as true if he becomes inconsistent himself.

To assume something as big as a new class of uncomputable physical laws,
for which we have no indication or evidence, requires some compelling
reason. What is the reason in your case?




> Steven Leahar, I, and others (there are more functionalists than us)
> predict that the neurons are doing more than just the kind of discrete
> logic function abstract computers do.
>

Certainly, neurons are far more complex than AND or XOR gates. I agree with
you there, but that is irrelevant to the point:

The right arrangement of logic gates, together with an unbounded working
memory (that is, a computer), can be programmed to perform any computable
mathematical function, or to simulate any physical object that follows
computable physical laws.
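
To make the "any computable function" claim concrete, here is a small
sketch of my own in Python: NOT, AND, OR, and XOR built entirely from NAND
gates. Stacking such constructions, plus memory, is all a computer needs
in order to compute any computable Boolean function.

# Any Boolean function can be built from a single gate type (NAND).
# This is the sense in which "mere" gates suffice, once enough of them
# are arranged correctly.

def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

# Sanity check over all inputs.
for a in (False, True):
    for b in (False, True):
        print(a, b, and_(a, b), or_(a, b), xor(a, b))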

So the question to answer is:
What is the nature of this uncomputable physics, and how does the neuron
use it?

It's okay to say: I don't know. But it's important to recognize what you
are required to assume for this argument to hold: the presence and
importance of a new, uncomputable physics, which plays a necessary
functional role in the behavior of neurons. Neurons must decide to fire or
not fire based on this new physics, and presently known factors such as
ion concentrations, synaptic inputs, dendritic connections, and so on must
be insufficient to determine whether or not a neuron will fire.

> Somehow they use qualities, like redness and greenness to represent
> information, in a way that can be "computationally bound" doing
> similar computation to what the mere discrete logic gates are doing, when
> they represent things with 1s and 0s.
>

Why presume that red and green must be low-level constructs rather than
high-level constructs?

> A required functionality is if redness changes to blueness, or anything
> else, the system must behave differently and report the difference.  But
> this functionality isn't possible in abstract systems, because no
> matter how the substrate changes, it still functions the same.  This is by
> design.  (i.e. no matter what is representing a value like '1', whether
> redness or bluenness or +5 volts, or punch in paper..., you need a
> different dictionary for each different representation to tell you what is
> still representing the 1.)  Redness, on the other hand, is just a fact.  No
> dictionary required, and substrate independence is impossible, by design.
>

That redness appears as a brute fact to a mind does not mean it must be a
primitive brute fact or property of the substrate itself.

Again from Chalmers:

“Now, let us take the system’s “point of view” toward what is going on.
What sort of judgments will it form? Certainly it will form a judgment
such as “red object there,” but if it is a rational, reflective system, we
might also expect it to be able to reflect on the process of perception
itself. How does perception “strike” the system, we might ask?
The crucial feature here is that when the system perceives a red object,
central processes do not have direct access to the object itself, and they
do not have direct access to the physical processes underlying perception.
All that these processes have access to is the color information itself,
which is merely a location in a three-dimensional information space. When
it comes to linguistically reporting on the situation, the system cannot
report, “This patch is saturated with 500- to 600-nanometer reflections,”
as all access to the original wavelengths is gone. Similarly, it cannot
report about the neural structure, “There’s a 50-hertz spiking frequency
now,” as it has no direct access to neural structures. The system has
access only to the location in information space.
Indeed, as far as central processing is concerned, it simply finds itself
in a location in this space. The system is able to make distinctions, and
it knows it is able to make distinctions, but it has no idea how it does
it. We would expect after a while that it could come to label the various
locations it is thrown into–“red,” “green,” and the like–and that it would
be able to know just which state it is in at a given time. But when asked
just how it knows, there is nothing it can say, over and above “I just
know, directly.” If one asks it, “What is the difference between these
states?” it has no answer to give beyond “They’re just different,” or “This
is one of those,” or “This one is red, and that one is green.” When pressed
as to what that means, the system has nothing left to say but “They’re just
different, qualitatively.” What else could it say?”

“It is natural to suppose that a system that can know directly the location
it occupies in an information space, without having access to any further
knowledge, will simply label the states as brutely and primitively
different, differing in their “quality.” Certainly, we should expect these
differences to strike the system in an “immediate” way: it is thrown into
these states which in turn are immediately available for the direction of
later processing; there is nothing inferential, for example, about its
knowledge of which state it is in. And we should expect these states to be
quite “ineffable”: the system lacks access to any further relevant
information, so there is nothing it can say about the states beyond
pointing to their similarities and differences with each other, and to the
various associations they might have. Certainly, one would not expect the
“quality” to be something it could explicate in more basic terms.”
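
To make that picture concrete, here is a toy sketch of my own (in Python;
purely an analogy, not anything Chalmers wrote): a "central process" whose
only input is a location in a three-dimensional information space can
label and compare its states, but has nothing further to say about them.

# Toy model of a system that only sees locations in a 3-D color space.
# The names, values, and structure here are hypothetical illustrations.

class CentralProcess:
    def __init__(self):
        self.labels = {}  # learned names for locations in the space

    def perceive(self, location):
        # All that arrives is a point in a 3-D space; no wavelengths,
        # no neural structure.
        return tuple(round(x, 2) for x in location)

    def label(self, location, name):
        self.labels[self.perceive(location)] = name

    def report(self, location):
        state = self.perceive(location)
        name = self.labels.get(state, "something I have no name for")
        return "This is " + name + ". How do I know? I just know, directly."

    def compare(self, loc_a, loc_b):
        if self.perceive(loc_a) == self.perceive(loc_b):
            return "They are the same."
        return "They're just different, qualitatively."

system = CentralProcess()
system.label((0.9, 0.1, 0.1), "red")
system.label((0.1, 0.9, 0.1), "green")
print(system.report((0.9, 0.1, 0.1)))
print(system.compare((0.9, 0.1, 0.1), (0.1, 0.9, 0.1)))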



> So, the prediction is that it is a fact that something in the brain has a
> redness quality.
>

What are your base assumptions here from which this prediction follows? Is
it something like:

1. Qualia like red seem primitively real
2. Therefore we should assume qualia like red are primitively real
3. It then follows the brain must use this primitively real stuff to
compose red experiences

I see the logic of this, but how things seem and how reality is are often
different. That this line of reasoning leads to fading/dancing/absent
qualia (short of requiring that neuronal behavior involves an unknown,
uncomputable physics) raises the bar of doubt for me, enough to tilt me
towards the idea that in this case, appearances perhaps do not reflect
reality as it is (as is often the case in science, e.g. life seemed
designed until Darwin).

In other words, to keep a simpler physics, I am willing to give up a simple
account of qualia. This makes qualia into complex, high level properties of
complex processes within minds, but preserves the relative simplicity of
our physical theories as we presently know and understand them.

Is this a price worth paying? It means we still have more explaining to do
regarding qualia, but this is no different a quest than the one biologists
faced when they gave up on a simple "élan vital" in explaining the unique
properties and abilities of "organic matter."

In the end, the unique abilities and properties of organic matter (to
grow, to initiate movement, to cook rather than melt) turned out not to be
due to a primitive property inherent in organic matter, but rather to be a
matter of its highly complex organization.

I think the same could be true for qualia: that they are not the result of
a simple primitive, but the result of a complex organization.

> Our brain uses this quality to represent conscious knowledge of red things
> with.  Nothing else in the universe has that redness quality.  So, when you
> get to the point of swapping out the first pixel of glutamate/redness
> quality, with anything else, the system must be able to report that it is
> no longer the same redness quality.  Otherwise, it isn't
> functioning sufficiently to have conscious redness and greenness
> qualities.  So, the prediction is, no functionalist will ever be able to
> produce any function, nor anything else, that will result in a redness
> experience, so the substitution will fail.  If this is true, all the
> 'dancing', 'fading', and all the other 'hard problem' contradictions no
> longer exist.  It simply becomes a color problem, which can be resolved
> through experimentally demonstrating which of all our descriptions of stuff
> in the brain is a description of redness.
>
> So, if you understand that, does this argument convince you you must be a
> functionalist, like the majority of people?
>

Do you agree with my assessment of the trade-offs? That is, we either have:

1. Simple primitive qualia, but new uncomputable physical laws

Or

2. Simple computable physics, but complex functional organizations
necessary for even simple qualia like red?

If not, is there a third possibility I have overlooked?

Jason