[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 6 05:48:51 UTC 2023

On Wed, Apr 5, 2023 at 9:50 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> "In the absence of the things they mean, they have no meaning" -- This I
> disagree with. If two English speakers survived while the rest of the
> universe disappeared completely, the two speakers could still carry on a
> meaningful conversation. Their words would still mean things to them.

I'm sorry, but that is not the point. My statement was merely a casual way
of saying that words have referents, that those referents give them
meaning, and that without those referents they are meaningless. The English
speakers in your example have referents for their words in their minds, and
so their words remain meaningful to them.

Giovanni apparently either does not like the concept or does not understand
it. I think it is the former, as the concept is integral to my argument
that LLMs have no access to the meanings of the words in the texts on which
they are trained.

Unlike the English speakers in your example, an LLM has no access to the
referents for the words on which it is trained. It can do no more than
analyze the statistical relationships and patterns between and among them
and make predictions about future words and patterns, which, by the way, is
*exactly what GPT-4 says it does*. GPT-4 says I am quite accurate to call
it an unconscious, highly sophisticated autocomplete feature, similar to
but more powerful than what is found in any word processing application.
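To make the "statistical relationships" point concrete, here is a minimal
sketch of next-word prediction from co-occurrence counts alone. This is a
toy bigram model, vastly simpler than GPT-4's transformer architecture, and
the corpus is invented for illustration; but it shows the relevant feature:
the model sees only which tokens follow which, never what any token refers
to in the world.

```python
from collections import Counter, defaultdict

# Toy corpus; the model will only ever see these token sequences,
# never the cats, mats, or food the words refer to.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count, for each word, which words follow it and how often.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # -> "cat", the most frequent word after "the"
```

An LLM replaces these raw counts with learned weights over long contexts,
but the input remains text alone: patterns among symbols, with no channel
to the referents of those symbols.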

