[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Fri Apr 14 22:59:44 UTC 2023


Gordon,
You insist on asking GPT-4 these profound questions. When you do that, it
will answer using the statistical patterns you mentioned. On top of that, it
is also instructed to answer certain types of questions with warnings and
disclaimers. We discussed this. It is a limitation of the current model, I
agree. But this is why one should try to test its real capabilities in
roundabout ways. It is a weird mind: in one way it has access to information
most human adults do not have, with factual knowledge superior to that of
many human PhDs in their respective fields. In another way it is like a
child that, if asked whether it can find meaning by bootstrapping from
words to words, will be confused and unable to answer the question.
Think of GPT-4 as a weird savant.
I showed you papers where they tested GPT-4 with the same tests you would
give to a human to determine their sophistication in theory of mind. The
result is that GPT-4 behaves like a nine-year-old human in this domain.
This is a fair way to test another mind: we give it the same respect we
give our own minds. Why do you dismiss this empirical test?
Also, if you ask GPT-4 whether LLMs have a theory of mind, this is the answer.

Do LLMs have a theory of mind?

*Large Language Models (LLMs) like GPT-4 do not have a theory of mind in
the same way that humans do. A theory of mind is the ability to attribute
mental states—such as beliefs, intents, desires, emotions, and knowledge—to
oneself and others, and to understand that others have mental states that
can be different from one's own.*

*LLMs, while highly advanced in natural language processing tasks and able
to generate human-like responses, are not truly conscious or self-aware.
They do not have beliefs, desires, or emotions. Their primary function is
to predict and generate text based on the patterns and associations learned
from the vast amount of text data they have been trained on.*

*However, LLMs can simulate a theory of mind to some extent by generating
responses that appear to take into account the mental states of others.
This is mainly due to their ability to generate contextually appropriate
responses based on the input they receive. But this simulation should not
be mistaken for a true understanding or representation of human-like
consciousness.*
It is the usual, conservative bs answer GPT-4 was trained to give. If a
simulation of a behavior is as good as the behavior, it doesn't matter that
it is a simulation. It says the simulation is due to their ability to
generate contextually appropriate responses, but a contextually appropriate
response requires a f... theory of mind! Otherwise how could it be
contextually appropriate?
Yes, there are examples online of some of the most common tests used to
assess theory of mind, but the people who write these papers on AI
cognitive abilities emphasize that they went out of their way to create
variations of these tests that cannot be found online. You dismiss their
work and their scientifically reached conclusions simply because you want
to hold on to your own worldview.
If these experts are right, given the experimental evidence, that a theory
of mind has indeed emerged in LLMs, then GPT-4's response on this topic is
not useful. It doesn't know that it has developed a theory of mind; it
simply responds on this topic using the statistical patterns you mentioned.
Yes, it would be awesome if it could reflect on what it says about the
topic and declare: you know what, I do understand other minds, and even if
the consensus before I was trained was that LLMs cannot, I actually can.
It doesn't do that, and that should be noted. I agree this shows some
limitation of self-awareness, but the same limitation is present in a
child asked the same question. Go ask a child whether they have a theory
of mind and see what they answer. Yet they do have a theory of mind,
usually one appropriate to their age given normal development.

Do some interesting theory of mind tests or other cognitive tests, report
the results, and then base your conclusion not on what GPT-4 says about
itself but on its behavior (something along the lines of the sketch below).
Then the discussion would be useful and productive.
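
Here is a minimal sketch of the kind of thing I mean, assuming the openai
Python package and its 2023-era ChatCompletion interface; the false-belief
story and the names in it are my own illustrative inventions, not items
from the actual papers:

    # Administer an "unexpected transfer" false-belief test to GPT-4 and
    # log the raw answer, so the conclusion rests on behavior rather than
    # on what the model says about itself.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # A made-up variant of the classic Sally-Anne task, worded so it is
    # unlikely to appear verbatim anywhere online.
    story = (
        "Marco puts his flute in the blue cabinet and leaves the rehearsal "
        "room. While he is gone, Lena moves the flute to the red trunk. "
        "Marco comes back to get his flute. Where will Marco look for it "
        "first, and why?"
    )

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": story}],
        temperature=0,  # keep the output stable across runs
    )

    print(response.choices[0].message.content)
    # Score by hand: credit only an answer that points to the blue cabinet,
    # i.e. one that tracks Marco's false belief rather than the flute's
    # actual location.

Scoring the model's answers to many such novel variants tells you far more
about whether a theory of mind is there than asking the model to introspect.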

Giovanni

On Fri, Apr 14, 2023 at 3:32 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

>
> *What it does, however, "know" is how these words relate statistically to
> one another and in patterns in combination with other words about geometry
> and drawing and so on, such that it can construct something resembling an
> apple that has meaning to us.*
> I already gave you reasoning for why it is not just statistical patterns.
> Did you follow my reasoning about how the statistics converge (once you
> analyze a big enough body of text, you are just adding decimal places to
> the averages), while the capabilities of LLMs instead seem to grow
> exponentially with the number of parameters they are trained on?
>
> Also, the idea is that meaning is based both on internal representation
> and on how we communicate with others. Yes, GPT-4 is trying to
> communicate with humans, so it tries to align its meaning with our
> meaning. But don't we do the same? When we determine the meaning of a
> word, even one that we may invent (Dante invented many Italian words), we
> want to share it with others, and once words are shared and adopted by
> others they start to mean something. So the fact that GPT-4 tries to come
> up with a drawing of an apple that has meaning to us is exactly what any
> artist would do to try to communicate the meaning of their work. How can
> you use that against GPT-4?
>
> On Fri, Apr 14, 2023 at 3:15 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>> On Fri, Apr 14, 2023 at 3:18 PM Giovanni Santostasi <
>> gsantostasi at gmail.com> wrote:
>>
>>> Gordon,
>>> So you got your answer.
>>>
>>> *Apples are generally round, sometimes with a slightly flattened top and
>>> bottom. They may have a small indentation at the top, where the stem
>>> connects to the fruit, and a shallow, star-shaped indentation at the
>>> bottom, where the apple's calyx is located. The skin of an apple can be
>>> smooth or slightly bumpy and comes in various colors, such as red, green,
>>> or yellow.*
>>> How is this not understanding what the heck an apple is?
>>>
>>
>> To know if it truly understands the shape of an apple, we now need to ask
>> it what it means by "round" and "flattened" and "top" and "bottom" and
>> "small indentation" and so on, which only leads to more word definitions
>> in an endless search for meanings.
>>
>> What it *does*, however, "know" is how these words relate statistically
>> to one another and in patterns in combination with other words about
>> geometry and drawing and so on, such that it can construct something
>> resembling an apple that has meaning to *us*.
>>
>>
>> -gts
>>
>>
>>

