# [ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Sun Apr 16 12:36:29 UTC 2023

```
On Sun, Apr 16, 2023, 3:03 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Sat, Apr 15, 2023 at 10:47 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> I disagree completely.
>
> The LLM can learn patterns, yes, (this is part of what GPT does and
> reports that it does) but no, it cannot learn what the parts of speech
> mean, nor does it claim to know the parts of speech.
>
> > Numbers are often listed in a certain sequence (e.g. in ascending
> order), often used before nouns (can infer it gives a count).
>
> What is a number?
>

A commonly observed symbol, occasionally found in lists: sequences
delimited by newline characters, in the form:
1. X
2. Y
3. Z
The model recognizes that this order is nearly always preserved but has no
apparent upper bound. The model also recognizes the plural nature of some
verbs, e.g., "the boy and man are skiing" vs. "the boy is skiing". It
likewise sees "the one boy is skiing" and "the two boys are skiing"; note
the use of 'is' with a singular noun or with 'one', but 'are' when two or
more nouns are connected or when 'two' is used. The model often sees
follow-up sentences refer to groups with a number, for example: "The
mother, father and child strolled. The three had a great time." From such
patterns the model can begin to directly associate numbers with the count
of nouns given.

Do you at least agree there's sufficient information in text to learn the
meaning of the word 'two'?

What is a noun?
>

A noun is a word that the model can categorize as special, as nouns are the
only words that follow the articles: a, an, the.

A person, place or thing, you say? What is a person,
>

A noun that's often the subject of a sentence (i.e., a noun appearing
before the verb).

a place,
>

A noun that persons or things are often said to be 'in', or 'at'.

or a thing?
>

A noun that's more often the object of a verb, i.e., one that follows the
verb in a sentence: "the man kicked the stone."

See, all these patterns are present in text: there are rules and categories
that emerge purely from an analysis of which sequences of words are seen or
not seen.

> > Words that can stand alone as sentences are verbs.
>
> What is a verb?
>

A verb is a word that the model can categorize as special, as verbs are the
only words that can stand alone as a sentence.

(Words and sentences are defined below.)

Action? What is action?
>

A verb that follows a noun.

For that matter, what is a sentence?
>

Sequences delimited by the symbols: ., !, ?
Each often contains fewer than a dozen or so recurring character sequences
(words), of which there are about a million, all drawn from an alphabet of
26 symbols, and almost every word contains at least one occurrence of one
of 6 of those 26 symbols. These word sequences are themselves delimited by
the symbol: ' '

> Likewise with all other parts of speech. Even if it can classify every
> noun as belonging to a certain set of symbols X, and every verb as
> belonging to another set of symbols Y, it could still never know what is a
> noun or a verb. It can know only the pattern of how these classes of
> symbols tend to appear together.
>

Do you agree with my example of learning the meaning of the word 'two' and
other numbers, from seeing lists and from the number of nouns in sentences
paired with numbers?

If so, then couldn't the model learn the word 'count' from examples like:
"He counted 'one, two, three'"

Could it learn the word 'vowel' from all the lists it sees associated with
that word, and connect the meaning with those six symbols it finds at least
one of in every word? 'a, e, i, o, u, y'

> GPT-4 can learn only the patterns and relationships between and among
> word-symbols with no knowledge of the meanings of the individual words, *exactly
> as it reports that it does*. It does this extremely well, and it is in
> this way that it can *simulate* human understanding.
>

It seems to me that you are putting all your effort into seeing how it
couldn't be possible, rather than into seeing how it could.

Think of all the data points and structures available for it to build:
every word-pair and word-order frequency, sorted into a list; a
high-dimensional word-proximity space, with related words clustered into
various groups; words arranged into huge hierarchical tree structures based
on connections by intermediate words like "is", "of", "has", etc.

You'll ask, but how does it get started? I gave you plenty of examples
above.

Try, as an exercise, thinking about how it could work.

Spend 10 minutes putting yourself in the shoes of someone in a Korean
library (with no pictures or translations) given thousands of years to
figure out what any of the symbols mean.

Jason
```
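To make the "one … is" vs. "two … are" agreement pattern in the message above concrete, here is a minimal sketch in Python. The toy corpus, the function name `copula_after`, and the fixed two-token offset are illustrative assumptions, not a claim about how GPT actually processes text:

```python
# Tally which copula ("is" vs "are") follows phrases led by a number word.
from collections import Counter

# Toy corpus (assumed), echoing the examples from the email.
examples = [
    "the one boy is skiing",
    "the two boys are skiing",
    "one dog is barking",
    "two dogs are barking",
    "the boy and man are skiing",
]

def copula_after(number_word: str) -> Counter:
    """Count the copula appearing two tokens after a number word."""
    counts = Counter()
    for s in examples:
        toks = s.split()
        if number_word in toks:
            i = toks.index(number_word)
            # pattern assumed here: "<number> <noun> is/are ..."
            if i + 2 < len(toks) and toks[i + 2] in {"is", "are"}:
                counts[toks[i + 2]] += 1
    return counts

print(copula_after("one"))   # 'is' dominates
print(copula_after("two"))   # 'are' dominates
```

A learner tallying such counts over enough text could associate 'one' with singular agreement and 'two' with plural agreement, without any grounding beyond the text itself.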
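The article heuristic for nouns ("the only words that follow a, an, the") can likewise be sketched as a frequency count. The corpus and the `candidate_nouns` helper are hypothetical:

```python
# Words that immediately follow an article form a candidate noun set.
from collections import Counter

ARTICLES = {"a", "an", "the"}

def candidate_nouns(text: str) -> Counter:
    """Count words appearing immediately after an article."""
    tokens = text.lower().split()
    counts = Counter()
    for prev, word in zip(tokens, tokens[1:]):
        if prev in ARTICLES:
            counts[word.strip(".,!?")] += 1
    return counts

corpus = "The boy kicked a stone. The man saw the boy and an owl."
print(candidate_nouns(corpus))
# 'boy', 'stone', 'man', 'owl' surface as candidates; 'kicked' does not
```

Real English also puts adjectives after articles, so this heuristic over-generates; the point is only that the distributional signal exists in the text.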
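The purely distributional definitions of "sentence" and "word" given in the email (sequences delimited by ., !, ?; words delimited by spaces) can be sketched directly. The regex and sample text are assumptions:

```python
# Segment text into sentences and words using only delimiter symbols.
import re

text = "The boy is skiing! The two boys are skiing. Who counted 'one, two, three'?"

# Sentences: split on the terminator symbols ., !, ?
sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
# Words: split each sentence on spaces, shedding stray punctuation.
words = [w.strip("',") for s in sentences for w in s.split()]

print(len(sentences), len(words))
```

Nothing in this segmentation requires knowing what any symbol means; it uses only the delimiter regularities the email describes.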
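The vowel observation can also be checked mechanically: on a (toy, assumed) word list, the six symbols a, e, i, o, u, y jointly cover every word, and no five of them suffice, so each symbol carries its own weight:

```python
# Check that the six vowel symbols cover every word, and minimally so.
from itertools import combinations

words = ["cat", "bed", "sit", "dog", "run", "fly"]  # toy list, assumed
vowels = "aeiouy"

# Every word contains at least one of the six symbols.
covers_all = all(set(w) & set(vowels) for w in words)
# No five-symbol subset of the six covers every word.
five_enough = any(all(set(w) & set(sub) for w in words)
                  for sub in combinations(vowels, 5))
print(covers_all, five_enough)
```

A learner noticing that exactly these six symbols recur in every word has, in effect, discovered the category the word 'vowel' labels.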
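Finally, the "high-dimensional word-proximity space" mentioned above can be sketched with simple co-occurrence counts. The toy corpus, window size, and cosine comparison are illustrative assumptions, not GPT's actual mechanism:

```python
# Represent each word by the words seen within a small window around it,
# then compare words by cosine similarity of those context counts.
import math
from collections import Counter, defaultdict

tokens = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog . the dog chased the cat").split()

window = 2
contexts = defaultdict(Counter)
for i, w in enumerate(tokens):
    for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
        if j != i:
            contexts[w][tokens[j]] += 1

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# 'cat' and 'dog' share contexts (sat, chased, the), so they land near
# each other in this space; 'cat' and 'on' do not.
print(cosine(contexts["cat"], contexts["dog"]) >
      cosine(contexts["cat"], contexts["on"]))
```

Scaled up, clusters in such a space are the "related words clustered into various groups" the email points to.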