[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Sun Apr 9 02:34:29 UTC 2023


You think that the red experienced by a person doesn't require a
dictionary. It does.
The dictionary is the association between the perception of a certain range
of light frequencies and the experience of redness. That experience is
simply an activation pattern of neurons, because that is what the brain
does when it recognizes and remembers something.
It is all in the activation patterns. Now, it is possible that we are aware
of only the activation patterns that happen in the cortex, because these
are the ones that give rise to conscious experiences.
These patterns are similar but probably quite different in detail from
person to person. How the activation patterns are produced, for example
which neurotransmitters are discharged, or at what levels, inside the
brain, is quite irrelevant.
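To caricature what I mean in a few lines of Python (a toy sketch only;
every name and number is invented for illustration, and real neural coding
is vastly richer than this):

def wavelength_to_pattern(wavelength_nm):
    """Map a wavelength in nanometers to a stand-in 'activation pattern'."""
    if 620 <= wavelength_nm <= 750:
        return (1, 0, 1, 1, 0)  # arbitrary bits standing in for 'redness'
    if 495 <= wavelength_nm <= 570:
        return (0, 1, 1, 0, 1)  # arbitrary bits standing in for 'greenness'
    return (0, 0, 0, 0, 0)      # no recognized color

print(wavelength_to_pattern(680))  # -> (1, 0, 1, 1, 0): the 'red' entry

That function is the "dictionary": an association from a range of light
frequencies to the pattern that stands in for the experience.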
Physiology uses many alternative pathways to do the same thing. Glutamate
may be involved, but it doesn't really matter: other neurotransmitters
could serve equivalent functions if not enough glutamate is present, and
glutamate is not where the redness you are looking for is located. It is
located in the particular pattern of activation. This pattern of
activation is also probably different from the pattern for the memory of
red, which is why seeing red feels different from merely remembering it.
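The glutamate point can be caricatured the same way, again with invented
names: two different mechanisms that yield the identical pattern count, on
this view, as the same redness.

RED_PATTERN = (1, 0, 1, 1, 0)  # same stand-in pattern as above

def pathway_glutamate(wavelength_nm):
    # Mechanism 1: explicit conditional, standing in for one transmitter.
    return RED_PATTERN if 620 <= wavelength_nm <= 750 else None

def pathway_substitute(wavelength_nm):
    # Mechanism 2: table lookup, standing in for an alternative transmitter.
    table = {(620, 750): RED_PATTERN}
    for (low, high), pattern in table.items():
        if low <= wavelength_nm <= high:
            return pattern
    return None

# Different internals, same output: the 'redness' lives in the pattern.
assert pathway_glutamate(680) == pathway_substitute(680)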
All these things are pretty well understood in neuroscience, even if not
in as extreme detail as we would like. But the general principle is well
understood. These patterns of activation are unique to each individual,
both because the brain is structured differently from individual to
individual and because of how we learned to associate red with this
particular activation pattern. It is something that is partly genetically
programmed and partly based on early exposure to red after we were born
(or even inside the uterus).
I REALLY, REALLY don't see where the mystery you point out is.
Again, while we may not yet understand all the details, we do know what
the neural correlates of colors are. Maybe you don't like the idea of
neural correlates, but to me that is the best scientific understanding we
have of brain stuff.
What else do you expect? It is all in the modeling; this is how you do
science.
The beauty is that if we could make a synthetic brain replicating these
correlates, then that synthetic model would experience red exactly as a
human does.
This is very testable, because I could, for example, connect the
color-processing part of my brain to a synthetic model of it running in a
machine and still experience the same red as before (or maybe slightly
differently if the model is not that good). This is not far-fetched,
because devices like these already exist in real life: cochlear implants,
for example. Or have you heard of the experiment where they implanted a
memory chip in a rat? When the chip was on, the rat remembered the layout
of a labyrinth it had learned previously (they replicated the memory of it
in the chip), and when the chip was off, the rat forgot. They even
transferred the chip to a rat that had never experienced the labyrinth,
and when the chip was on, it knew the layout of the labyrinth.
So who cares about glutamate? There was no glutamate in the chip.
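The replacement test can be sketched in the same toy style, with every
name invented for the example: if the synthetic module reproduces the
input/output behavior of the biological one, the rest of the system cannot
tell which is plugged in.

class BiologicalColorModule:
    def respond(self, wavelength_nm):
        # Stand-in for the wetware color-processing region.
        return (1, 0, 1, 1, 0) if 620 <= wavelength_nm <= 750 else None

class SyntheticColorModule:
    def respond(self, wavelength_nm):
        # Different substrate (no glutamate here), same input/output mapping.
        return (1, 0, 1, 1, 0) if 620 <= wavelength_nm <= 750 else None

def rest_of_brain(color_module, wavelength_nm):
    # Downstream processing sees only the returned pattern.
    return color_module.respond(wavelength_nm)

assert (rest_of_brain(BiologicalColorModule(), 680)
        == rest_of_brain(SyntheticColorModule(), 680))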
Giovanni
On Sat, Apr 8, 2023 at 7:14 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> So I'm trying to understand if my
> interpretation is right. Let me describe the picture again. There are 3
> panels. In the first panel there is a stylized drawing (it looks almost
> like a cartoon from the 1800s) of a man looking at a strawberry. The
> strawberry outside the man is grey (not sure what that represents, but
> you can give it a try). But there is a cartoon bubble coming out of the
> head of the man where the strawberry is now colored red. In the second
> panel there is a perfect replica of the first panel, but with the
> difference that inside the cartoon bubble the strawberry is green.
> Finally, in the third panel there is a robot that is still looking at the
> grey strawberry, but inside the cartoon bubble appears the word Red (with
> a capital R). Can you give your best
> interpretation of this picture? Ignore the one I gave previously and come
> up with what you think is the intent of the creator of this picture.
>
> Based on the description provided, I can offer an interpretation of the
> picture:
>
>    1. First panel: The grey strawberry outside the man may represent the
>    objective nature of reality, independent of our perceptions. The red
>    strawberry in the thought bubble signifies the man's personal, subjective
>    experience of the strawberry. By coloring the strawberry red inside the
>    man's thought bubble, the artist may be illustrating that the man perceives
>    the strawberry as red.
>
>    2. Second panel: The second panel, with the green strawberry in the
>    thought bubble, could represent another person's subjective experience of
>    the same object (the strawberry). This panel highlights the possibility
>    that people may perceive the same object in different ways, emphasizing the
>    subjective nature of perception.
>
>    3. Third panel: The robot looking at the grey strawberry with the word
>    "Red" in its thought bubble suggests that the robot is processing the
>    information about the strawberry and identifying its color as "Red."
>    However, since the robot lacks subjective experiences, it can only
>    represent the concept of color using a symbol or word, rather than
>    perceiving the color directly like the human subjects in the first two
>    panels.
>
> Overall, the picture seems to explore the nature of human perception and
> subjective experiences, contrasting them with the more limited, symbolic
> understanding of machines. It emphasizes the unique and personal way that
> humans perceive and experience the world around them, and how this differs
> from the information processing of machines.
>
> On Sat, Apr 8, 2023 at 6:53 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> [image: image.png]
>>
>> Brent,
>> I think I understand what you are trying to communicate.
>>
>> I will give it a try. And then ask GPT-4.
>>
>> Giovanni:
>> These pictures try to communicate the direct experience of seeing a
>> strawberry. The strawberry is recognized by the first person (or person
>> in state A). The strawberry is grey outside the person to symbolize that
>> an object has some "qualities", like shape, color, size, and so on, but
>> they are not recognized until a conscious entity attributes some
>> particular experience to them. Subject A experiences something in his
>> brain, and it is such a direct and immediate experience that it cannot
>> really be communicated (well, we can try, as we do in the picture by
>> "showing" the color red that we hope other humans recognize as red). The
>> second panel shows a subject B who has received some modification to his
>> sensory system (it doesn't matter how this is achieved) such that
>> everything experienced by A as red is now green. But this experience is
>> also direct and immediate. While different in color, the two types of
>> experiences share something in common in being "direct" and immediate.
>> The third subject, C, a machine of some kind, doesn't have direct
>> experience, but its sensors are tuned to respond to the stimuli that a
>> normal human would associate with red, and after processing the sensory
>> data it answers with a symbol, the English word "Red". The idea here is
>> to communicate that there is something fundamentally different between
>> the experiences in A and B and the indirect "experience" or event in C,
>> because C lacks the direct experience and uses only a symbol to
>> communicate that an object with the quality of redness has been detected.
>>
>> I think this is a good summary of what is being expressed in the drawing.
>> I hope.
>> I disagree, though, on what really happens, because to me "Red" is also
>> a direct experience. Or, better, both A and B do not experience anything
>> direct, given that there is much processing in perceiving the color red,
>> and several layers of hierarchical computing. The experience of the
>> color red is as much a "word", a symbol, as "Red" is. It is simply
>> expressed in the natural language of neuronal semantics. The great
>> mystery we believe lies in directly experiencing the world is the
>> self-reflectivity part, where the entity is aware of its own state. This
>> awareness is subjective and unique, and it cannot be communicated (I
>> think actually it could be communicated if I could wire my own brain to
>> the other person's; in fact, there are many proposals for how this could
>> be achieved, and maybe it has even already been achieved in recent
>> demonstrations of brain-to-brain communication). It looks like a
>> miracle, and in the sense that it happens it is one, but it is no more a
>> miracle than the fact that anything exists at all. I think the
>> conceptual mistake in the picture is to conclude that "Red" is not an
>> experience. It is a translation of the color red, but it is also an
>> experience. If it were not, imagine hearing a pitch, a particular sound
>> vibration at a given frequency: is that not also a direct experience
>> (one that doesn't involve a light color)? What is different in that case
>> from experiencing "Red"? Both are labels we assign to a given perception
>> of an external object.
>> Is the issue thinking about "Red" vs seeing "Red"? The bot could also
>> have an internal representation of a strawberry and its association with
>> "Red", so accessing that representation could invoke the word "Red", and
>> we could argue that is not seeing "Red". But the bot can also
>> distinguish this "memory" of Red from actually experiencing it right now
>> (maybe given a causality time window). That is fine, because it would be
>> equivalent to a human thinking about the color of a strawberry vs seeing
>> one. The experiences of remembering, seeing, or dreaming of a strawberry
>> are distinguishable both in the AI and in humans, so again they are very
>> similar if not identical.
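>> To caricature that "causality time window" in toy Python (all names and
>> thresholds invented for illustration only): tag each pattern with the
>> time it was produced, and call it "seeing" only if it is fresh.
>>
>> import time
>>
>> CAUSALITY_WINDOW_S = 0.1  # invented threshold, for illustration only
>>
>> def classify_event(pattern_timestamp):
>>     # 'Seeing' if the pattern is fresh, 'remembering' otherwise.
>>     age = time.time() - pattern_timestamp
>>     return "seeing" if age <= CAUSALITY_WINDOW_S else "remembering"
>>
>> print(classify_event(time.time()))       # -> seeing
>> print(classify_event(time.time() - 60))  # -> remembering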
>> In conclusion, what we think are direct experiences really are not
>> (there is a lot of processing, filtering, and association before they
>> become experienced), and experiencing is basically a form of language
>> that the brain uses to communicate with itself (colors are not that
>> dissimilar to pain or pleasure, for example).
>>
>>
>> GPT-4:
>>
>>    1. Human experience is subjective and personal: The individual
>>    experiences of subjects A and B are unique to them, shaped by their own
>>    sensory systems and personal histories. This subjectivity contributes to
>>    the richness and complexity of human perception.
>>
>>    2. Machines lack subjective experience: While subject C can process
>>    sensory data and respond with a symbol (the word "Red"), it lacks the
>>    personal and subjective qualities that define human experience. This
>>    results in a more limited and detached understanding of the world.
>>
>>    3. The role of language in human communication: The use of symbols and
>>    language is crucial for humans to communicate their experiences to others.
>>    However, language can never fully capture the directness and immediacy of
>>    an individual's subjective experience. This limitation is highlighted by
>>    the difference between the experiences of subjects A and B and the symbolic
>>    response of subject C.
>>
>>    4. The challenge of understanding subjective experiences in others: The
>>    idea that subject B perceives what subject A sees as red as green serves to
>>    underscore the challenge of truly understanding another person's subjective
>>    experience. Although we can use language and symbols to share our
>>    perceptions, there will always be an element of our experiences that
>>    remains unique and personal.
>>
>> On Sat, Apr 8, 2023 at 5:44 PM Will Steinberg via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Brent, do you think perhaps people might understand the info and images
>>> in your camp from your website, which you have posted probably hundreds
>>> of times, and they just disagree with you?
>>>
>>> On Sat, Apr 8, 2023, 1:50 PM Brent Allsop via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>> I keep showing this image, attempting to communicate something:
>>>>
>>>> [image: 3_functionally_equal_machines_tiny.png]
>>>> Sure, our elementary school teacher told us the one on the left is red,
>>>> the one in the middle is green, and the one on the right is just the word
>>>> 'Red'.
>>>>
>>>> But it is evident from all these conversations that nobody here
>>>> understands the deeper meaning I'm attempting to communicate.
>>>> Some people seem to be getting close, which is nice, but they may not
>>>> yet be fully there.
>>>> If everyone fully understood this, all these conversations would be
>>>> radically different.
>>>> Even if you disagree with me, can anyone describe the deeper meaning
>>>> I'm attempting to communicate with this image?
>>>> What does this image say about qualities, different ways of
>>>> representing information, and different ways of doing computation?
>>>>
>>>> How about this: I'll give $100 worth of Ether, or just USD, to anyone
>>>> who can fully describe the meaning I am attempting to portray with
>>>> this image.
>>>>
>>>> On Sat, Apr 8, 2023 at 10:27 AM Gordon Swobe via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>> On Sat, Apr 8, 2023 at 9:31 AM Jason Resch <jasonresch at gmail.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Apr 8, 2023, 10:45 AM Gordon Swobe <gordon.swobe at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>> On Sat, Apr 8, 2023 at 3:43 AM Jason Resch via extropy-chat <
>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>
>>>>>>>> There is phenomenal consciousness. That I would call awareness of
>>>>>>>> first-person, non-sharable information concerning one's internal
>>>>>>>> states of mind.
>>>>>>>>
>>>>>>>
>>>>>>> It is this phenomenal consciousness to which I refer. If you do not
>>>>>>> think there is something it is like to be a large language model,
>>>>>>> then we have no disagreement.
>>>>>>>
>>>>>>
>>>>>> I believe there is something it is like to be either the LLM, or
>>>>>> something inside it.
>>>>>>
>>>>>
>>>>> Not sure what you mean by something inside it. A philosopher named
>>>>> Thomas Nagel wrote a famous paper titled “What Is It Like to Be a
>>>>> Bat?” That is the sense I mean here. Do you think there is something
>>>>> it is like to be GPT-4? When you ask it a question and it replies, is
>>>>> it aware of its own private, first-person experience in the sense
>>>>> that we are aware of our private experience? Or does it have no
>>>>> awareness of any supposed experience?
>>>>>
>>>>> -gts
>>>>>
>> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_functionally_equal_machines_tiny.png
Type: image/png
Size: 26214 bytes
Desc: not available
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20230408/3f17f3c8/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 30588 bytes
Desc: not available
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20230408/3f17f3c8/attachment-0003.png>

