[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Mon Apr 17 19:22:09 UTC 2023


On Mon, Apr 17, 2023 at 11:54 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 17/04/2023 17:49, Gordon Swobe wrote:
>
> It is mind-blowing that even after all my attempts to explain Linguistics
> 101, you guys still fail to understand the meaning of the word "referent."
>
>
> Then, please, explain it to us! (using the relevant discipline, i.e.
> neuroscience, not linguistics). Presumably you think linguistics is
> relevant because these systems we're discussing are called "Large Language
> Models"
>

Yes, I certainly do believe that linguistics, the scientific study of human
language, can tell us something about models of human language.


> , but having 'language' in the name doesn't mean that 'language' explains
> how they work. It's all about signal-processing. Unless you think that only
> brains use signal-processing and not LLMs, or vice-versa.
>

> So try explaining it in terms that we will understand. Presumably my
> diagram from several posts ago:
>
> *'Grounded' concept*
>
> *(The block is a 'real-world' object. What this actually means, I have no
> good idea)*
>
>
> is inaccurate, and not what you mean at all, so maybe you can explain it
> in terms of the other diagram:
>
> *Linked concept (very simplified)*
>
> *(The blue ovals are myriad other concepts, memories, sensory inputs,
> tokens, etc.)*
> * Of course, a real diagram of the links would be so dense as to be
> unreadable. The other ovals would be linked to each other as well as to the
> central oval, and it would be 3D with links extending out, as far as the
> sensory organs, which transduce specific aspects of the 'real world' such
> as temperature changes, specific frequencies of sound, etc.*
>
> Or, if not, then at least in terms of the things that we know to be true
> about the brain. i.e., nothing in the brain has access to anything except
> signals from other parts of the brain, and signals from the sense organs,
> coded in the 'language' our brains use: spike trains. I'm sure you know
> what spike trains are (giving a stern warning look at Spike here, finger to
> lips).
>
> And, again if you disagree with the statement above, please give your
> reasons for disagreeing.
>

Let us say that the diagram above, with its "myriad of other concepts etc.,"
can accurately model the brain/mind/body, with links extending to the sensory
organs and so on. Fine. I can agree with that, at least temporarily and for
the sake of argument, but it is beside the point.

My argument is about large language models. LLMs, in the purest sense of
that term, are nothing like such a system. They have no eyes, no ears, no
senses whatsoever to register anything outside of the text. They are
trained only on symbolic text material. From their point of view (so to
speak), the corpus of text on which they are trained is the entire
universe.
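
To make that concrete, here is a toy sketch in Python (the corpus and the
whitespace tokenizer are invented for illustration; real models use vastly
larger corpora and subword tokenizers). From the model's side, the training
data is nothing but sequences of integer token IDs, with no channel to
anything outside the text:

    # Toy sketch: a text-only corpus as the model "sees" it.
    # The corpus and whitespace tokenizer are invented for illustration.
    corpus = "the potato grew in the field . the cook boiled the potato ."

    # Each distinct word form gets an arbitrary integer ID.
    vocab = {word: idx for idx, word in enumerate(dict.fromkeys(corpus.split()))}

    # The model is trained on sequences of these IDs and nothing else.
    token_ids = [vocab[word] for word in corpus.split()]

    print(vocab)      # e.g. {'the': 0, 'potato': 1, 'grew': 2, ...}
    print(token_ids)  # this stream of numbers is the model's entire "universe"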

The LLM has no way to understand the meaning of the symbol "potato," for
example -- that is, it has no way to ground the symbol "potato" -- except
in terms of other symbols in the text that it also has no way to ground or
understand. The LLM is, so to speak, trapped in a world of symbolic forms
with no access to the meanings of those forms. This does not mean it cannot
manipulate these forms in ways that mimic human understanding: it was trained
on a vast amount of formal material written in ways that we find
understandable, and it has learned the statistics and patterns of English
language use. But the meanings of the symbolic forms in its inputs and
outputs are assigned by us, the human operators.
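
To illustrate what "knowing the statistics and patterns" amounts to, here is
a deliberately crude sketch in Python, a bigram count model rather than
anything like GPT-4, again with an invented corpus. The only "meaning" such a
model ever acquires for "potato" is a table of statistics over other tokens:

    from collections import Counter, defaultdict

    # Deliberately crude stand-in for statistical training on text:
    # count which token follows which. The corpus is invented for illustration.
    corpus = ("the potato grew in the field . "
              "the cook peeled the potato . "
              "the cook boiled the potato .").split()

    next_token_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        next_token_counts[current][nxt] += 1

    # Everything the model "knows" about 'potato' is relations to other symbols:
    print(next_token_counts["potato"])   # Counter({'.': 2, 'grew': 1})
    # Nothing in these tables points outside the text to an actual potato.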

We can and do assign meanings to these formal symbols because, unlike the
LLM, we have access to the world outside of the corpus and so understand
what the symbols mean. This can create the appearance that it is the LLM
that understands and conveys the meanings, but this is an illusion: we are
projecting our own mental processes onto the LLM.
...
Now, I understand that from here we might get into theoretical discussions
about AI robots with electronic sensors, multi-modal LLMs, and so on, but it
would be helpful if people could at least understand that LLMs, per se, are
unconscious, with no true understanding of the world, exactly as GPT-4
professes to be when asked whether it can ground symbols for itself.

-gts

