[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Wed Apr 5 10:57:18 UTC 2023

On Wed, Apr 5, 2023, 2:36 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> As you feel I have slighted you by ignoring your counterpoints, Will, I
> found this concise (thank you) message from you to me...
> > "To shorten my above response and give you a simple question to respond
> to, can you show that the 'referents' you speak of are not themselves just
> relations much like an LLM uses?  Do you understand how color vision
> literally works?  I feel like you don't, because if you did, I think you
> would not see much of a difference between the two.  Do you think light is
> some kind of magic color-carrying force?  Past the retina, color is
> condensed into a series of 'this, not that' relations.  The same kind of
> relations that ChatGPT uses."
> I have made no arguments about qualia or colors or about the science of
> color vision or anything similar, which is one reason why I only skimmed
> past your messages about these things. My arguments are about language and
> words and meaning and understanding. It seemed almost as if you thought you
> were addressing someone other than me. However, let me answer this:
> > can you show that the 'referents' you speak of are not themselves just
> relations much like an LLM uses?
> By referents, I mean the things and ideas outside of language to which
> words point. If you hold an apple in your hand and say "this is an apple,"
> the apple is the referent that gives your word "apple" meaning. You might
> also say it is a "red" apple. We can say that your experience of the color
> red exists outside of language, and that when you say the word "red," you
> are pointing to that experience, to that particular aspect of your
> experience of the apple.
> Now, the relations that an LLM uses are merely statistical relations
> between and among symbols that in themselves have no meaning. In the
> massive amount of text on which an LLM is trained, it will detect, for
> example, that the symbol "color" often appears in certain ways near the
> symbol "red," and it can detect many other relations with related symbols
> like "apple," such that it can compose what are to us meaningful
> statements about red apples. But the symbols themselves are meaningless
> outside the context of their referents in the real world, and the LLM has
> no access to those referents, as it is trained only on the symbols.
> Does that answer your question?
> Sorry again that I offended you.
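The purely statistical relations Gordon describes can be sketched in a few
lines. This is a toy illustration, not anything from an actual LLM: the
corpus, the window size, and the pair-counting scheme are all invented for
the example, and real models learn far richer representations than raw
co-occurrence counts. Still, it shows the basic idea of a learner that
relates "red" to "color" from text alone, without any access to redness
itself:

```python
from collections import Counter

# Hypothetical toy corpus and window size, invented for illustration.
corpus = [
    "the apple is red",
    "red is a color",
    "the color of the apple is red",
]
WINDOW = 3  # count pairs of words appearing within 3 positions

# Count how often each unordered word pair co-occurs within the window.
pair_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for v in words[i + 1 : i + 1 + WINDOW]:
            pair_counts[frozenset((w, v))] += 1

# "red" and "color" co-occur in the text, so a purely statistical learner
# can relate the two symbols -- without ever perceiving the color red.
print(pair_counts[frozenset(("red", "color"))])  # -> 1
```

Whether such relations can ever amount to meaning is, of course, exactly
the point under dispute.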

For what it's worth, I don't think Gordon was intentionally trolling or
being passive-aggressive. There is another, entirely innocent explanation,
which I will offer. I am not claiming it is necessarily what happened here,
but it is worth mentioning because it happens frequently, and many people
are unaware of the phenomenon.

This is a phenomenon we are all subject to, and of which we should all be
aware, called cognitive dissonance. It can occur whenever our brains
encounter information perceived as threatening to our existing beliefs,
acting almost like an immune system for the mind. It creates blind spots
that literally hide information from conscious processing: we skip over a
paragraph as if it weren't there, or invent a reason to stop reading. It is
very difficult to recognize when it is happening to us, but it happens to
everyone under the right conditions.

I say this only to shed some light on a common occurrence that affects
everyone, in the hope it might explain what can happen when we discuss
ideas that threaten beliefs considered fundamental to one's own identity.
When we are aware of this phenomenon, we can better understand when it
happens to others we are talking to, or even when it is happening in
ourselves.
