[ExI] GPT-4 on its inability to solve the symbol grounding problem
Gordon Swobe
gordon.swobe at gmail.com
Wed Apr 5 13:28:14 UTC 2023
Thanks Jason, yes, I certainly was not trolling. If you are saying I skipped
over anything, I think it was not on account of cognitive dissonance (a
term I think most people here understand), but rather because Will’s
writing about color perception looked to me like part of the never-ending
debate about qualia which I debated here until I was blue in the face about
15 years ago. I had made a conscious decision not to get embroiled in that
again, and it looked like Brent had taken up the torch.
The intention of this thread was to explore what GPT-4 says about itself.
Apparently, it understands language models in the same way I understand
them. ChatGPT says it is not conscious and that it does not understand the
meanings of words. It merely understands the statistical relations between
words and is very good at predicting which words will be most meaningful to
us.
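
To put that concretely, here is a toy Python sketch of my own (an
illustration only, nothing like GPT-4's actual internals) showing
next-word prediction driven purely by word-pair statistics:

    from collections import Counter, defaultdict

    # A tiny corpus standing in for the LLM's training text.
    corpus = "the apple is red . the sky is blue . the apple is sweet .".split()

    # Count, for each word, which words tend to follow it.
    follows = defaultdict(Counter)
    for left, right in zip(corpus, corpus[1:]):
        follows[left][right] += 1

    def predict(word):
        # Pick the statistically most likely next word.
        return follows[word].most_common(1)[0][0]

    print(predict("is"))  # prints 'red', chosen from counts alone

A real LLM is vastly more sophisticated, but the principle is the same:
relations among symbols, nothing more.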
-gts
On Wed, Apr 5, 2023 at 5:05 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Wed, Apr 5, 2023, 2:36 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> As you feel I have slighted you by ignoring your counterpoints, Will, I
>> found this concise (thank you) message from you to me...
>>
>> > "To shorten my above response and give you a simple question to respond
>> to, can you show that the 'referents' you speak of are not themselves just
>> relations much like an LLM uses? Do you understand how color vision
>> literally works? I feel like you don't, because if you did, I think you
>> would not see much of a difference between the two. Do you think light is
>> some kind of magic color-carrying force? Past the retina, color is
>> condensed into a series of 'this, not that' relations. The same kind of
>> relations that ChatGPT uses."
>>
>> I have made no arguments about qualia or colors or about the science of
>> color vision or anything similar, which is one reason why I only skimmed
>> past your messages about these things. My arguments are about language and
>> words and meaning and understanding. It seemed almost as if you thought you
>> were addressing someone other than me. However, let me answer this:
>>
>> > can you show that the 'referents' you speak of are not themselves just
>> relations much like an LLM uses?
>>
>> By referents, I mean the things and ideas outside of language to which
>> words point. If you hold an apple in your hand and say "this is an apple,"
>> the apple is the referent that gives your word "apple" meaning. You might
>> also say it is a "red" apple. We can say that your experience of the color
>> red exists outside of language, and that when you say the word "red," you
>> are pointing to that experience, to that particular aspect of your
>> experience of the apple.
>>
>> Now, the relations that an LLM uses are merely statistical relations
>> between symbols that in themselves have no meaning. In the massive amount
>> of text on which an LLM is trained, it will detect, for example, that the
>> symbol "color" often appears in certain ways near the symbol "red," and it can
>> detect many other relations with related symbols like "apple," such that it
>> can compose what are to us meaningful statements about red apples. But the
>> symbols themselves are meaningless outside of the context of their
>> referents in the real world, and the LLM has no access to those referents
>> as it is trained only on the symbols.
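>>
>> To make that concrete, here is a toy Python sketch of my own (an
>> illustration only, not how any real LLM is implemented): the
>> co-occurrence statistics come out identical even when every word is
>> replaced by an arbitrary integer ID, because training touches only the
>> symbols, never their referents.
>>
>> from collections import Counter
>>
>> corpus = "the apple is red the color red is warm".split()
>>
>> # Replace each word with an opaque, meaningless ID.
>> ids = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
>> tokens = [ids[w] for w in corpus]
>>
>> # Count adjacent-pair co-occurrences over the ID stream.
>> pair_counts = Counter(zip(tokens, tokens[1:]))
>>
>> # To the model, "color" followed by "red" is just the pair (4, 3):
>> print(pair_counts[(ids["color"], ids["red"])])  # prints 1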
>>
>> Does that answer your question?
>>
>> Sorry again that I offended you.
>>
>>
>
>
> For what it's worth, I don't think Gordon was intentionally trolling or
> being passive-aggressive. There's another, entirely innocent explanation
> that I will offer. I am not claiming it is necessarily the case here, but
> it is worth mentioning anyway, as it happens frequently and yet many
> people are unaware of the phenomenon.
>
> This is a phenomenon we are all subject to, and should all be aware of,
> called cognitive dissonance. It can occur whenever our brains encounter
> information perceived as threatening to our existing beliefs, acting
> almost like an immune system for the mind. It has the effect of creating
> blind spots that literally hide information from conscious processing:
> we will skip over a paragraph as if it weren't there, or invent a reason
> to stop reading. It is very difficult to realize when it is happening to
> us, but it happens to everyone under the right conditions.
>
> I say this only to shed some light on a common occurrence that affects
> everyone, in the hope it might explain what can happen when we discuss
> ideas that threaten beliefs considered fundamental to one's own identity.
> When we are aware of this phenomenon, we can better recognize when it
> happens to others we are talking to, or even when it is happening in
> ourselves.
>
> Jason