# [ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Mon Apr 17 22:33:53 UTC 2023

```
*Is this the kind of stuff you are saying is all that really happens in
real life?*

Yes, I have to run to do real stuff in the real world, but when I come
back I will answer more completely.

But I kind of did when I talked about how physics thinks about things: it
is all in the relationships. An electron is really what an electron does.
We know what an electron is by the way it responds to (interacts with, so
basically a relationship) an electromagnetic field. How do we know what an
electromagnetic field is? By putting electrons in it and seeing how they
respond to it. It seems a tautology, and it is, but I also explained how to
get out of this apparent tautology. If we can do this with physics, which
is our deepest and most complete understanding of the universe, do you
think we cannot do the same with common language or any other knowledge
system?
Giovanni

On Mon, Apr 17, 2023 at 3:26 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi Giovani,
>
> I'm trying to get my head around this view that relationships are all
> there is.
>
> My normal thinking is: there is the subject, "I". There is the object, the
> ball. Then there is the programmed relationship: I throw.
> "I" is a label for my body. The ball is a round object that fits in my
> hand. And "throw" is a label for a set of programming that defines the
> relationship (what am I going to do to the ball?).
> For me, it is the computational binding which contains all the diverse
> sets of programmed, or meaningful, relationships. For me, you still need
> the objective, for the relationships to be meaningful.
>
> So I'm trying to figure out how one might achieve all that with
> relationships, only.
> So I could be defined as the one doing the throwing.  And the ball could
> be defined as what is being thrown.
> And I guess a red property could be defined as a property of something
> that is different from something that is green.
> Is this the kind of stuff you are saying is all that really happens in
> real life?
>
>
>
>
>
> On Mon, Apr 17, 2023 at 3:24 PM Giovanni Santostasi via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>>
>>
>> *Start with three main components:
>> a. Sensorimotor experience (perception and action)
>> b. Symbolic representation (language, symbols)
>> c. Grounding (the process that connects symbols to experience)*
>>
>> Gordon,
>> I have already said that asking GPT-4 directly is not always the best
>> way to test its knowledge of itself. But you are using it as a tool to
>> help you with your imagination and express your ideas; that is ironic,
>> but fine.
>> The problem with the above is the "process that connects symbols to
>> experience". But how do you get symbols in the first place?
>> Do you realize that making symbols is the exact opposite of "grounding"?
>> It is the exact opposite process because you go from a "real" apple in
>> front of you to a generalization of "apple" that is based on seeing many
>> apples, excluding other objects from the apple category, and so on. You
>> need to compare apples to apples (see what I did there?) and apples to
>> oranges to understand what an apple is. It is about the relationships, the
>> connections. The grounding you talk about is a possible way to do it (one
>> that is relevant for evolutionary purposes), but it is not the ONLY way.
>> Any relationship would do.
>> I don't have time right now, but later I will do an experiment. I will
>> come up with invented objects that have certain common characteristics,
>> like edges (basically geometrical shapes), then train an AI to create
>> categories based on whatever the AI thinks is in common among these
>> objects. I bet you that it will be able to create categories just from
>> comparing these objects with each other, even though they are abstract
>> (I said shapes, but they could be described by a set of numbers or even
>> a rule to make them). Once you have the categories, you can assign words
>> to them so we know what we are talking about and can then have a
>> conversation.
>> Language can be built from relationships without referents to the real
>> world; or, better said, the only way to build meaning in language is to
>> rely on the relationships. The grounding process you have in mind,
>> Gordon, is not what actually happens. Maybe the grounding you imagine
>> gives you solace, but it is not what happens in real life.
>>
>> Giovanni
>>
>>
>>
>>
>>
>> On Mon, Apr 17, 2023 at 1:58 PM Gordon Swobe via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Mon, Apr 17, 2023 at 1:58 PM Ben Zaiboc via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> I suppose that's what I want, a graphical representation
>>>> of what you mean by 'grounding', incorporating these links.
>>>
>>>
>>> Not sure how to do it incorporating your links. I started scratching my
head trying to think of the best way to diagram it, then it occurred to me
>>> to ask GPT-4. It certainly "understands" the symbol grounding problem and
>>> why it cannot solve it for itself. Here is its solution.
>>>
>>> Prompt:
>>> How would you draw a diagram of symbol grounding to make it easy to
>>> understand?
>>>
>>> Continuation:
>>> To create an easily understandable diagram of symbol grounding, follow
>>> these steps:
>>>
>>> Start with three main components:
>>> a. Sensorimotor experience (perception and action)
>>> b. Symbolic representation (language, symbols)
>>> c. Grounding (the process that connects symbols to experience)
>>>
>>> Draw three circles, one for each component, and label them accordingly:
>>> a. A circle for "Sensorimotor Experience" at the bottom left
>>> b. A circle for "Symbolic Representation" at the bottom right
>>> c. A circle for "Grounding" at the top center
>>>
>>> Connect the circles with arrows to represent the flow of information:
>>> a. Draw an arrow from "Sensorimotor Experience" to "Grounding"
>>> b. Draw an arrow from "Symbolic Representation" to "Grounding"
>>> c. Draw an arrow from "Grounding" back to both "Sensorimotor Experience"
>>> and "Symbolic Representation"
>>>
>>> Add brief descriptions to each arrow to clarify the processes:
>>> a. Arrow from "Sensorimotor Experience" to "Grounding": "Extraction of
>>> features and patterns"
>>> b. Arrow from "Symbolic Representation" to "Grounding": "Association of
>>> symbols with features and patterns"
>>> c. Arrow from "Grounding" back to "Sensorimotor Experience": "Guided
>>> perception and action"
>>> d. Arrow from "Grounding" back to "Symbolic Representation": "Generation
>>> of meaningful symbols"
>>>
>>> Add examples to make the diagram more relatable:
>>> a. Next to "Sensorimotor Experience", you can add an image of a cat and
>>> a person observing the cat
>>> b. Next to "Symbolic Representation", you can write the word "cat" or
>>> draw a simple icon representing a cat
>>> c. Add an image of a brain or a thinking cloud in the "Grounding" circle
>>>
>>> With these steps, you will create a diagram that visually represents the
>>> concept of symbol grounding, making it easier to understand. The examples
>>> and descriptions help clarify the connections between sensorimotor
>>> experiences, symbolic representations, and the grounding process.
>>>
>>> -GPT-4
>>> ---
>>>
>>> -gts
>>>
>>> _______________________________________________
>>> extropy-chat mailing list
>>> extropy-chat at lists.extropy.org
>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>
```
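The experiment Giovanni proposes in the thread, forming categories purely by comparing invented abstract objects and only afterwards attaching words to them, can be sketched as a toy clustering run. Everything below is an illustrative assumption, not anything from the original posts: the two invented object "kinds", the choice of feature vectors (edge count and edge length), and the minimal k-means routine are all made up for the sketch.

```python
# Sketch of the proposed experiment: invented "objects" are just feature
# vectors (number of edges, average edge length). A tiny k-means groups
# them purely by comparing objects with each other -- no real-world
# referent -- and only afterwards do we attach words to the clusters.
# The "spiky"/"chunky" kinds and all parameters are illustrative.
import math
import random

random.seed(0)

def make_object(kind):
    """Generate an abstract object as a (num_edges, edge_length) vector."""
    if kind == "spiky":
        return (random.gauss(9, 1), random.gauss(1.0, 0.2))
    return (random.gauss(3, 1), random.gauss(4.0, 0.5))  # "chunky"

objects = [make_object("spiky") for _ in range(20)] + \
          [make_object("chunky") for _ in range(20)]

def kmeans(points, k, iters=20):
    """Minimal k-means: alternate assignment and centroid update."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest current center.
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster
        # (keep the old center if a cluster happens to be empty).
        centers = [
            tuple(sum(c) / len(cluster) for c in zip(*cluster))
            if cluster else centers[i]
            for i, cluster in enumerate(clusters)
        ]
    return centers, clusters

centers, clusters = kmeans(objects, k=2)

# Only now do we "assign words" to the discovered categories,
# so we know what we are talking about.
names = {i: f"category-{i}" for i in range(len(clusters))}
for i, cluster in enumerate(clusters):
    print(names[i], "has", len(cluster), "objects, center near", centers[i])
```

The point of the sketch is that the categories emerge from the pairwise relationships among the objects themselves; the labels are attached only after the structure exists.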