[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Sun Apr 9 01:53:34 UTC 2023


[image: image.png]

Brent,
I think I understand what you are trying to communicate.

I will give it a try, and then ask GPT-4.

Giovanni:
These pictures try to communicate the direct experience of seeing a
strawberry. The strawberry is recognized by the first person (or the person
in state A). The strawberry is drawn grey outside the person to symbolize an
object with certain "qualities", like shape, color, and size, that are not
recognized until a conscious entity attributes some particular experience to
them. Subject A experiences something in his brain, and this experience is so
direct and immediate that it cannot really be communicated (we can try, as
the picture does, by "showing" the color red and hoping that other humans
recognize it as red). The second panel shows a subject B whose sensory system
has been modified (it doesn't matter how) so that everything A experiences as
red, B experiences as green. But this experience is also direct and
immediate. Though the colors differ, the two experiences share something in
being "direct" and immediate. The third subject, C, is a machine of some kind
that has no direct experience; its sensors are tuned to respond to the
stimuli that a normal human would associate with red, and after processing
the sensory data it answers with a symbol, the English word "Red". The idea
here is to communicate that there is something fundamentally different
between the experiences of A and B and the indirect "experience" or event in
C, because C lacks direct experience and uses only a symbol to communicate
that an object with the quality of redness has been detected.

I think (and hope) this is a good summary of what the drawing expresses.
I disagree, though, on what really happens, because to me "Red" is also a
direct experience. Or, better, neither A nor B experiences anything direct,
given how much processing and how many layers of hierarchical computation go
into perceiving the color red. The experience of the color red is as much a
"word", a symbol, as "Red" is; it is simply expressed in the natural language
of neuronal semantics. The great mystery we attribute to directly
experiencing the world is really the self-reflective part, where the entity
is aware of its own state. This awareness is subjective and unique and cannot
be communicated (actually, I think it could be communicated if I could wire
my brain to another person's; there are many proposals for how this could be
achieved, and perhaps it already has been in recent brain-to-brain
communication experiments). It looks like a miracle, and it is one in the
sense that it happens at all, but it is no more of a miracle than the fact
that anything exists at all. I think the conceptual mistake in the picture is
to conclude that "Red" is not an experience. It is a translation of the color
red, but it is also an experience. If it were not, consider hearing a pitch,
a sound vibration at a particular frequency: is that not also a direct
experience (one that involves no light or color)? How is that different from
experiencing "Red"? Both are labels we assign to a given perception of an
external object.
Is the issue thinking about "Red" vs. seeing "Red"? The bot could also have
an internal representation of a strawberry and its association with "Red", so
accessing that representation could invoke the word "Red", and we could argue
that this is not seeing "Red". But the bot can also distinguish this "memory"
of red from actually experiencing it right now (perhaps within some causal
time window). That is fine, because it would be equivalent to a human
thinking about the color of a strawberry vs. seeing one. The experiences of
remembering, seeing, or dreaming of a strawberry are distinguishable both in
the AI and in humans, so again they are very similar if not identical.
In conclusion, what we think of as direct experiences really are not (a lot
of processing, filtering, and association happens before they are
experienced), and experiencing is basically a form of language that the brain
uses to communicate with itself (colors are not that dissimilar from pain or
pleasure, for example).


GPT-4:

1. Human experience is subjective and personal: The individual experiences
   of subjects A and B are unique to them, shaped by their own sensory
   systems and personal histories. This subjectivity contributes to the
   richness and complexity of human perception.

2. Machines lack subjective experience: While subject C can process sensory
   data and respond with a symbol (the word "Red"), it lacks the personal
   and subjective qualities that define human experience. This results in a
   more limited and detached understanding of the world.

3. The role of language in human communication: The use of symbols and
   language is crucial for humans to communicate their experiences to
   others. However, language can never fully capture the directness and
   immediacy of an individual's subjective experience. This limitation is
   highlighted by the difference between the experiences of subjects A and B
   and the symbolic response of subject C.

4. The challenge of understanding subjective experiences in others: The idea
   that subject B perceives what subject A sees as red as green underscores
   the challenge of truly understanding another person's subjective
   experience. Although we can use language and symbols to share our
   perceptions, there will always be an element of our experiences that
   remains unique and personal.

On Sat, Apr 8, 2023 at 5:44 PM Will Steinberg via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Brent, do you think perhaps people might understand the info and images in
> your camp from your website, which you have posted probably hundreds of
> times, and they just disagree with you?
>
> On Sat, Apr 8, 2023, 1:50 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> I keep showing this image, attempting to communicate something:
>>
>> [image: 3_functionally_equal_machines_tiny.png]
>> Sure, our elementary school teacher told us the one on the left is red,
>> the one in the middle is green, and the one on the right is just the word
>> 'Red'.
>>
>> But it is evident from all these conversations that nobody here
>> understands the deeper meaning I'm attempting to communicate.
>> Some people seem to be getting close, which is nice, but they may not yet
>> be fully there.
>> If everyone fully understood this, all these conversations would be
>> radically different.
>> Even if you disagree with me, can anyone describe the deeper meaning I'm
>> attempting to communicate with this image?
>> What does this image say about qualities, different ways of representing
>> information, and different ways of doing computation?
>>
>> How about this: I'll give $100 worth of Ether, or just USD, to anyone who
>> can fully describe the meaning I'm attempting to portray with this image.
>>
>> On Sat, Apr 8, 2023 at 10:27 AM Gordon Swobe via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> On Sat, Apr 8, 2023 at 9:31 AM Jason Resch <jasonresch at gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Sat, Apr 8, 2023, 10:45 AM Gordon Swobe <gordon.swobe at gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>> On Sat, Apr 8, 2023 at 3:43 AM Jason Resch via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>
>>>>>> There is phenomenal consciousness. That I would call awareness of
>>>>>> first person non-sharable information concerning one's internal states of
>>>>>> mind.
>>>>>>
>>>>>
>>>>> It is this phenomenal consciousness to which I refer. If you do not
>>>>> think there is something it is like to be a large language model then
>>>>> we have no disagreement.
>>>>>
>>>>
>>>> I believe there is something it is like to be either the LLM, or
>>>> something inside it.
>>>>
>>>
>>> Not sure what you mean by something inside it. A philosopher named
>>> Thomas Nagel wrote a famous paper titled something like “What is it like to
>>> be a bat?” That is the sense that I mean here. Do you think there something
>>> it is like to be GPT-4? When you ask it a question and it replies, is it
>>> aware of its own private first person experience in the sense that we are
>>> aware of our private experience? Or does it have no awareness of any
>>> supposed experience?
>>>
>>> -gts
>>>