[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Mon Apr 17 19:44:29 UTC 2023


*My argument is about large language models. LLMs, in the purest sense of
that term, are nothing like such a system. They have no eyes, no ears, no
senses whatsoever to register anything outside of the text. They are
trained only on symbolic text material. From their point of view (so to
speak), the corpus of text on which they are trained is the entire
universe.* I'll let Ben elaborate, but given that we are aligned on many
things, I think I do understand what he is trying to communicate with that
diagram.
1) It is an illusion to think that there is a single grounding object when
we think about a word, even for what seems like a very solid and concrete
word such as "apple". It is actually a complex network of sensations,
abstractions, experiences, and different types of apples that are
abstracted away into a few forms in our heads. The word "apple" is really
a complex network. It doesn't matter what the linguists say, because most
of these people are humanists with very little understanding of
neuroscience or other sciences, and in particular zero understanding of
how LLMs work or other advanced concepts in computer science. Their
"science" is antiquated, and we need another science of how language works.
2) LLMs work very similarly to human brains because the connections above
are also present in the LLMs. They do not "refer" to sensory experiences
but to other words or symbols (or clusters of words); yet link enough of
these words, and loop them back to the original word, and you get meaning
in exactly the same way the brain does. The meaning IS THE COMPLEX
CONNECTIONS. (See the toy sketch after this list for how such relations
can emerge from text alone.)
3) The above idea is not just a good theoretical framework that can be
shown to work in many different contexts (mathematics, logic, computing);
it also seems to work in real life, given that GPT-4 really understands,
as some of my experiments (and others') show. If you give GPT-4 the
cognitive tests that are used to test humans, it performs similarly to
humans at different levels of development, depending on the task. This is
an empirical fact; it cannot really be denied, and only excuses can be
made to dismiss this evidence.
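
To make point 2 a bit more concrete, here is a minimal toy sketch in
Python (just an illustration with a made-up six-sentence corpus, not a
claim about how GPT-4 is actually built): it forms a co-occurrence network
from raw text and compares words only by their patterns of connection to
other words.

from collections import defaultdict
from math import sqrt

# A made-up toy corpus: these symbols are the only "world" the model sees.
corpus = [
    "i eat a ripe apple",
    "i peel a sweet orange",
    "the apple is a fruit",
    "the orange is a fruit",
    "i drive a red car",
    "the car is fast",
]

# Count how often each pair of words appears in the same sentence.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for v in words:
            if w != v:
                cooc[w][v] += 1

def similarity(w, v):
    """Cosine similarity between two words' co-occurrence profiles."""
    keys = set(cooc[w]) | set(cooc[v])
    dot = sum(cooc[w][k] * cooc[v][k] for k in keys)
    norm_w = sqrt(sum(c * c for c in cooc[w].values()))
    norm_v = sqrt(sum(c * c for c in cooc[v].values()))
    return dot / (norm_w * norm_v) if norm_w and norm_v else 0.0

# "apple" is judged closer to "orange" than to "car" purely from the
# pattern of connections between symbols -- no senses involved.
print(similarity("apple", "orange"))  # ~0.8
print(similarity("apple", "car"))     # ~0.7

Even in this tiny example "apple" comes out closer to "orange" than to
"car" without any sensory grounding at all; scale the statistics up
enormously and replace raw counts with learned weights, and you get
something like the relational structure an LLM builds.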

Giovanni

On Mon, Apr 17, 2023 at 12:29 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Mon, Apr 17, 2023 at 11:54 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On 17/04/2023 17:49, Gordon Swobe wrote:
>>
>> It is mind-blowing that even after all my attempts to explain Linguistics
>> 101, you guys still fail to understand the meaning of the word "referent."
>>
>>
>> Then, please, explain it to us! (using the relevant discipline, i.e.
>> neuroscience, not linguistics). Presumably you think linguistics is
>> relevant because these systems we're discussing are called "Large Language
>> Models"
>>
>
> Yes, I certainly do believe that linguistics, the scientific study of
> human language, can tell us something about models of human language.
>
>
>> , but having 'language' in the name doesn't mean that 'language' explains
>> how they work. It's all about signal-processing. Unless you think that only
>> brains use signal-processing and not LLMs, or vice-versa.
>>
>
>> So try explaining it in terms that we will understand. Presumably my
>> diagram from several posts ago:
>>
>> *'Grounded' concept*
>>
>> *(The block is a 'real-world' object. What this actually means, I have no
>> good idea)*
>>
>>
>> is inaccurate, and not what you mean at all, so maybe you can explain it
>> in terms of the other diagram:
>>
>> *Linked concept (very simplified)*
>>
>> *(The blue ovals are myriad other concepts, memories, sensory inputs,
>> tokens, etc.)*
>> * Of course, a real diagram of the links would be so dense as to be
>> unreadable. The other ovals would be linked to each other as well as to the
>> central oval, and it would be 3D with links extending out, as far as the
>> sensory organs, which transduce specific aspects of the 'real world' such
>> as temperature changes, specific frequencies of sound, etc.*
>>
>> Or, if not, then at least in terms of the things that we know to be true
>> about the brain. i.e., nothing in the brain has access to anything except
>> signals from other parts of the brain, and signals from the sense organs,
>> coded in the 'language' our brains use: spike trains. I'm sure you know
>> what spike trains are (giving a stern warning look at Spike here, finger to
>> lips).
>>
>> And, again if you disagree with the statement above, please give your
>> reasons for disagreeing.
>>
>
> Let us say that the diagram above with a "myriad of other concepts etc"
> can accurately model the brain/mind/body with links extending to sensory
> organs and so on. Fine. I can agree with that at least temporarily for the
> sake of argument, but it is beside the point.
>
> My argument is about large language models. LLMs, in the purest sense of
> that term, are nothing like such a system. They have no eyes, no ears, no
> senses whatsoever to register anything outside of the text. They are
> trained only on symbolic text material. From their point of view (so to
> speak), the corpus of text on which they are trained is the entire
> universe.
>
> The LLM has no way to understand the meaning of the symbol "potato," for
> example -- that is, it has no way to ground the symbol "potato" -- except
> in terms of other symbols in the text that it also has no way to ground or
> understand. The LLM is, so to speak, trapped in a world of symbolic forms
> with no access to the meanings of those forms. This does not mean it cannot
> manipulate these forms in ways that mimic human understanding, as it was
> trained on a vast amount of formal material written in ways that we find
> understandable, and it knows the statistics and patterns of English
> language use -- but the meanings of the symbolic forms in its inputs and
> outputs are
> assigned by us, the human operators.
>
> We can and do assign meanings to these formal symbols, as unlike the LLM,
> we do have access to the world outside of the corpus and so understand the
> meanings of the formal symbols. This can create the appearance that it is
> the LLM that understands and conveys the meanings, but this is an illusion.
> We are projecting our own mental processes onto the LLM.
> ...
> Now, I understand that from here, we might get into theoretical
> discussions about AI robots with electronic sensors and multi-modal LLMs
> and so on, but it would be helpful if people could at least understand that
> LLMs, per se, are unconscious with no true understanding of the world
> exactly as GPT-4 professes to be when asked if it can ground symbols for
> itself.
>
> -gts
>