[ExI] e: GPT-4 on its inability to solve the symbol grounding problem
Gordon Swobe
gordon.swobe at gmail.com
Sun Apr 16 22:55:24 UTC 2023
On Sun, Apr 16, 2023 at 2:07 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>> To ground the symbol "two" or any other number -- to truly understand that
>> the sequence is a sequence of numbers and what are numbers -- it needs
>> access to the referents of numbers which is what the symbol grounding
>> problem is all about. The referents exist outside of the language of
>> mathematics.
>>
>
> But they aren't outside the patterns within language and the corpus of
> text it has access to.
>
But they are. Consider a simplified hypothetical in which the entire corpus
is
“1, 2, 3, 4, Spring, Summer, Fall, Winter” and this pattern is repeated
many times.
How does the LLM know that the names of the seasons do not represent the
numbers 5, 6, 7, 8? Or that the numbers 1-4 do not represent four more
mysterious seasons?
To know the difference, it must have a deeper understanding of number,
beyond the mere symbolic representations of them. This is to say it must
have access to the referents, to what we really *mean* by numbers
independent of their formal representations.
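To make the point concrete, here is a rough Python sketch (my own toy
illustration, not anything an LLM literally runs) of the only kind of
information such a corpus offers a purely distributional learner:

from collections import Counter

# The hypothetical corpus: the eight-token pattern repeated many times.
cycle = ["1", "2", "3", "4", "Spring", "Summer", "Fall", "Winter"]
corpus = cycle * 1000

# All the learner sees is which token follows which.
bigrams = Counter(zip(corpus, corpus[1:]))

# Relabeling the seasons as "5", "6", "7", "8" would leave these counts
# structurally identical, so nothing here marks "Spring" as a season
# rather than as the successor of "4".
for (a, b), n in sorted(bigrams.items()):
    print(a, "->", b, ":", n)

The statistics pin down only the pattern, not what the tokens stand for.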
That is why I like the position of mathematical platonists, who say we can,
so to speak, "see" the meanings of numbers -- the referents -- in our
conscious minds. Kantians say essentially the same thing.
> Consider GPT having a sentence like:
> "This sentence has five words”
>
> Can the model not count the words in a sentence like a child can count
> pieces of candy? Is that sentence not a direct referent/exemplar for a set
> of cardinality of five?
>
You seem to keep assuming a priori knowledge that the model does not have
before it begins its training. How does it even know what it means to count
without first understanding the meanings of numbers?
I think you did something similar some weeks ago when you assumed it could
learn the meanings of words with only a dictionary and no knowledge of the
meanings of any of the words within it.
>>>
> But AI can't because...?
> (Consider the case of Helen Keller in your answer)
>
An LLM can’t because it has no access to the world outside of formal
language and symbols, and that is where the referents that give meaning to
the symbols are to be found.
-gts