[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Mon Apr 17 01:42:58 UTC 2023


> To know the difference, it must have a deeper understanding of number,
> beyond the mere symbolic representations of them. This is to say it must
> have access to the referents, to what we really *mean* by numbers
> independent of their formal representations.

What are you talking about?

> "1, 2, 3, 4, Spring, Summer, Fall, Winter" and this pattern is repeated
> many times.

Yeah, this alone is not enough to make the connection Spring==1, Summer==2,
but if I randomize the pattern (1, 3, 4, 2, Spring, Fall, Winter, Summer),
then randomize it again, and so on, eventually the LLM will make the
connection.
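To make that concrete, here is a rough sketch of what I mean (my own toy
illustration, not an actual LLM: just positional co-occurrence counting over
a hypothetical corpus generator, so the details are assumptions):

# Toy illustration: with one fixed ordering, any number<->season pairing is
# consistent with the data; across many randomly permuted but aligned
# sequences, only the true pairing keeps landing in the same slot.
import random
from collections import Counter

numbers = ["1", "2", "3", "4"]
seasons = ["Spring", "Summer", "Fall", "Winter"]
true_map = dict(zip(numbers, seasons))  # the mapping the corpus encodes

def sample_sequence():
    """One training 'sentence': the numbers in a random order, followed by
    the seasons in the corresponding order."""
    order = random.sample(numbers, k=4)
    return order + [true_map[n] for n in order]

# Count how often each number and each season occupy the same slot.
cooc = Counter()
for _ in range(1000):
    seq = sample_sequence()
    nums, seas = seq[:4], seq[4:]
    for n, s in zip(nums, seas):
        cooc[(n, s)] += 1

# Each number ends up paired with exactly one season, recovering the mapping.
for n in numbers:
    best = max(seasons, key=lambda s: cooc[(n, s)])
    print(n, "->", best, cooc[(n, best)])

With the single fixed ordering, every bijection between the two sets fits the
corpus equally well; with the permutations, the statistics alone single out
Spring==1, Summer==2, and so on.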

On Sun, Apr 16, 2023 at 3:57 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sun, Apr 16, 2023 at 2:07 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
>>> To ground the symbol "two" or any other number -- to truly understand that
>>> the sequence is a sequence of numbers and what numbers are -- it needs
>>> access to the referents of numbers which is what the symbol grounding
>>> problem is all about. The referents exist outside of the language of
>>> mathematics.
>>>
>>
>> But they aren't outside the patterns within language and the corpus of
>> text it has access to.
>>
>
>
> But they are. Consider a simplified hypothetical in which the entire
> corpus is
>
> “1, 2, 3, 4, Spring, Summer, Fall, Winter” and this pattern is repeated
> many times.
>
> How does the LLM know that the names of the seasons do not represent the
> numbers 5, 6, 7, 8? Or that the numbers 1-4 do not represent four more
> mysterious seasons?
>
> To know the difference, it must have a deeper understanding of number,
> beyond the mere symbolic representations of them. This is to say it must
> have access to the referents, to what we really *mean* by numbers
> independent of their formal representations.
>
> That is why I like the position of mathematical platonists who say we can
> so-to-speak “see” the meanings of numbers — the referents — in our
> conscious minds. Kantians say essentially the same thing.
>
>
> Consider GPT having a sentence like:
>>  "This sentence has five words”
>>
>> Can the model not count the words in a sentence like a child can count
>> pieces of candy? Is that sentence not a direct referent/exemplar for a set
>> of cardinality five?
>>
>
> You seem to keep assuming a priori knowledge that the model does not have
> before it begins its training. How does it even know what it means to count
> without first understanding the meanings of numbers?
>
> I think you did something similar some weeks ago when you assumed it could
> learn the meanings of words with only a dictionary and no knowledge of the
> meanings of any of the words within it.
>
>
>>>>
>> But AI can't because...?
>> (Consider the case of Hellen Keller in your answer)
>>
>
>
> An LLM can’t because it has no access to the world outside of formal
> language and symbols, and that is where the referents that give meaning to
> the symbols are to be found.
>
> -gts