[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Wed Apr 5 06:35:05 UTC 2023


Since you feel I have slighted you by ignoring your counterpoints, Will, I
found this concise (thank you) message from you to me:

> To shorten my above response and give you a simple question to respond to,
> can you show that the 'referents' you speak of are not themselves just
> relations much like an LLM uses? Do you understand how color vision
> literally works? I feel like you don't, because if you did, I think you
> would not see much of a difference between the two. Do you think light is
> some kind of magic color-carrying force? Past the retina, color is
> condensed into a series of 'this, not that' relations. The same kind of
> relations that ChatGPT uses.

I have made no arguments about qualia or colors or about the science of
color vision or anything similar, which is one reason why I only skimmed
past your messages about these things. My arguments are about language and
words and meaning and understanding. It seemed almost as if you thought you
were addressing someone other than me. However, let me answer this:

> can you show that the 'referents' you speak of are not themselves just
> relations much like an LLM uses?

By referents, I mean the things and ideas outside of language to which
words point. If you hold an apple in your hand and say "this is an apple,"
the apple is the referent that gives your word "apple" meaning. You might
also say it is a "red" apple. We can say that your experience of the color
red exists outside of language, and that when you say the word "red," you
are pointing to that experience, to that particular aspect of your
experience of the apple.

Now, the relations that an LLM uses are merely statistical relations between
and among symbols that in themselves have no meaning. In the massive amount
of text on which an LLM is trained, it will detect, for example, that the
symbol "color" often appears in certain ways near the symbol "red," and it
can detect many other relations with related symbols like "apple," such that
it can compose what are to us meaningful statements about red apples. But the
symbols themselves are meaningless outside the context of their referents in
the real world, and the LLM has no access to those referents, as it is
trained only on the symbols.
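To make the contrast concrete, here is a minimal sketch of what "relations
among symbols" means. It is only a toy co-occurrence counter in Python, not
anything resembling how GPT-4 is actually trained, and the tiny corpus and
window size are invented for the illustration:

    from collections import Counter

    # Toy illustration only: count how often word-symbols co-occur within
    # a small window. Nothing here refers to an actual apple or to the
    # experience of red; the "model" is just counts over symbol pairs.
    corpus = "the ripe apple is red . the red apple tastes sweet .".split()
    WINDOW = 2  # window size invented for this example

    cooccur = Counter()
    for i, word in enumerate(corpus):
        for j in range(i + 1, min(i + 1 + WINDOW, len(corpus))):
            cooccur[tuple(sorted((word, corpus[j])))] += 1

    # "red" is related to "apple" only because the symbols appear near
    # each other in the text, not because either one points at anything.
    print(cooccur[("apple", "red")])  # -> 2

A real LLM replaces raw counts with learned parameters over vastly more text,
but the training signal is still only which symbols occur with which other
symbols.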

Does that answer your question?

Sorry again that I offended you.

-gts

On Tue, Apr 4, 2023 at 6:01 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
>> > It's passive-aggressive.
>>
>
> I'm sorry if I come across that way. It is not intentional. I ignore some
> counterpoints simply because I don't have the time to get bogged down
> in all the excruciating details. Been there, done that. Also, I think Brent
> addressed many of your points.
>
> My point in this thread is that GPT-4, arguably the most advanced AI on
> the planet right now, denies that it has consciousness and denies that it
> has true understanding of the world or of the meanings of words. It says it
> knows only about the patterns and statistical relationships between words,
> which is exactly what I would expect it to say given that it was trained on
> the forms of words and not their meanings.
>
> -gts
>