[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Wed Apr 12 14:06:39 UTC 2023


On Tue, Apr 11, 2023, 9:32 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Hi Jason,
> Thank you for being such an educated listener, a sounding board, and an
> indicator of whether you can accept this as a falsifiable hypothetical
> possibility. And especially thanks for being willing to think with my
> assumption that physical qualities are elemental.
>

Thank you, Brent, I appreciate that. I must also thank you for your patience
and time with my questions.


> I know this is very hard for functionalists, who think functionality is
> more elemental and that redness can "arise" from function, rather than
> function being implemented with physical redness.  I know most
> functionalists (Giovani, I'm referring to you, for one, but this says more
> about the weakness of this theory, and my ability to describe it, than any
> weakness in a great intellect like Giovani) seem to be unable to do that.
> [image: 3_robots_tiny.png]
>
> You need to be very complete with what you mean by "functionally
> equivalent": it must be something that includes the function which
> something like glutamate provides, which is the redness quality.
>

I should point out here that, within functionalism, the question of what
"functional substitution level" is necessary to preserve all the functions
needed to preserve the mind and its qualia is an open question, and
according to some, unanswerable. That is to say, we don't know, and can't
prove, whether we have to simulate the brain at the subatomic level, the
atomic level, the molecular level, the protein level, the cellular level,
the neuronal level, the neural network level, etc., in order to preserve
all the functional relationships important to a given mind state.


> So when the above three systems are asked, "What is redness like for
> you?", the brain must be able to be aware of its redness quality and
> provide these honest, physically grounded answers:
> 1. My redness(glutamate) is like your redness(glutamate).
>

How does the first brain know what it's like for the other brain?


> 2. My redness(glycine) is like your greenness(glycine).
>

Same question.


> 3. My knowledge is abstract, and not like anything.  It is like the word
> 'red' and I have no idea what a redness quality is like, as I have no
> ability to experience redness.
>

Wouldn't this brain just assume redness is the abstract knowledge of the
word 'red'? How would it ever come to know that other brains felt something
different when they looked at the strawberry? (Let's say it experienced
vision as knowledge of a 2D grid where each pixel was populated with the
word representing the color in that position.)
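
To make that scenario concrete, here is a toy sketch in Python (entirely
hypothetical; the names and the grid are invented for illustration). The
system's whole "experience" of the strawberry is a grid of color words, so
the only answer it can ever give about redness is the token 'red' itself:

# A toy model of purely abstract color "knowledge": vision as a 2D
# grid of color words, with no quality behind any of the words.
# (Hypothetical illustration only.)

abstract_visual_field = [
    ["green", "green", "red", "red"],
    ["green", "red", "red", "red"],
    ["green", "green", "red", "green"],
]

def what_is_redness_like(visual_field):
    """All this system can report is the token it stores; there is
    nothing further behind the word for it to describe."""
    for row in visual_field:
        if "red" in row:
            return "Redness, for me, is just the word 'red'."
    return "I have no 'red' tokens in view."

print(what_is_redness_like(abstract_visual_field))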


> Note:  In effect, this is all very similar to the way all chat bots
> accurately respond to questions like this, even if it takes a bit of
> convincing to get them to agree with that.
> Note:  Once we discover what it is, in our brain, which has a redness
> quality, we will be able to endow AGIs with this (like when Cmdr. Data, in
> Star Trek, received his "emotion chip").  It would then be able to say
> things like: "Oh THAT is what your redness is like."
>

I don't see how that conclusion can ever be reached.


>   And it would then fit our definition of phenomenal consciousness.
>
> Asking about a molecule-level simulation is a good question.  I haven't
> thought about that as much as the neuron-level simulation/substitution,
> but there must be some set of molecules (maybe all the molecules that make
> up a bunch of neurons and their neurotransmitters) that is behaving the way
> it does because of its computationally bindable redness quality.
> An abstract molecule-level simulation might be able to behave
> identically, including making the claim that its redness was like your
> glutamate redness, but since you could eff the ineffable nature of real
> glutamate, you could objectively know how it was achieving those responses,
> and know it was lying.
>

Is it correct to say, then, that your beliefs are as follows:
1. A neural-level simulation, lacking the necessary detail of molecular
interactions, would deviate from the original by virtue of lacking the
properties of glutamate or other molecules.
2. A molecular-level simulation would have the necessary detail and would
respond identically to the real one; however, lacking the genuine redness
properties of real glutamate, such simulations would not actually see red
and would be, in some sense, visual zombies.

If this is a correct understanding of your views, I think you hold a very
similar position to that of John Searle and his theory of biological
naturalism.

https://en.m.wikipedia.org/wiki/Biological_naturalism

As such, the main philosophical arguments against it concern the coherence
of the full or partial zombies that would result from full or partial
neural substitution, or from rapidly alternating substitution circuits, as
described here:

https://consc.net/papers/qualia.html

> You would know that nothing in its brain was glutamate, and that nobody was
> ever able to experience redness (no matter how you simulated that redness)
> without glutamate.
>

How can we prove there isn't something else like glutamate that also
produces redness? Or maybe something close to red, but very slightly
orangish? What about a glutamate molecule where one protium nucleus was
substituted with a deuterium nucleus? This is the whole question of
multiple realizability.

As I see things, something's properties exist by virtue of that thing's
relationships with other things. If you devise some new framework of
different objects, but preserve all the relationships between them, then
all the same properties exist between them. Think of two isomorphic graphs,
having different vertices but the same edge relations.
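
As a quick illustration of that graph analogy, here is a sketch of my own
in Python (the vertex names and relabeling map are made up): relabel every
vertex of a graph, and every relational property, such as each vertex's
degree, carries over to its image, because the edge relations do:

# Two graphs with different vertex sets but identical edge relations:
# G2 is just G1 with every vertex relabeled. Relational properties,
# such as each vertex's degree, carry over unchanged.

G1_edges = {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")}

relabel = {"a": "w", "b": "x", "c": "y", "d": "z"}  # the isomorphism
G2_edges = {(relabel[u], relabel[v]) for (u, v) in G1_edges}

def degrees(edges):
    """Count how many edges touch each vertex."""
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

d1, d2 = degrees(G1_edges), degrees(G2_edges)
# Every vertex has the same degree as its image under the relabeling:
assert all(d1[v] == d2[relabel[v]] for v in relabel)
print(d1)  # e.g. {'a': 2, 'b': 2, 'c': 3, 'd': 1} (order may vary)
print(d2)  # e.g. {'w': 2, 'x': 2, 'y': 3, 'z': 1}

The vertices are entirely different objects, yet nothing structural is
lost.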

This is why I find functionalism appealing: whatever holds the unique
properties of redness, be it a neural network or a glutamate molecule, an
appropriate simulation can reconstruct a virtual instance of that thing and
implement all the same relations (and therefore properties) between it and
other virtual things. Thus the simulation, like the isomorphic graph, by
preserving all the same relationships, recovers all the same properties. If
the glutamate molecule possesses redness, then a perfect simulation of
glutamate will possess redness too.

To think otherwise leads to a situation where this whole world could be an
atomically detailed simulation, and everything would be the same: you would
still develop your theory of color qualia, we'd still debate Mary the color
scientist, and all the while we would have done so without anyone in the
world ever having seen red. Is this consistent? Is it possible?

Jason
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_robots_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20230412/959dd5d7/attachment.png>

