[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Thu Apr 6 13:26:36 UTC 2023

On Thu, Apr 6, 2023, 8:56 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Thu, Apr 6, 2023 at 6:16 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> This is a crucial point though. It means that meaning can exist entirely
>> within the confines and structure of a mind, independent of the universe
>> outside it.
> I certainly have never questioned that.

Okay, this indicates I had an incorrect understanding of how you saw things.
I had previously believed you thought there had to be some kind of direct
link and access between referents in the real world and the mind that
understood the meanings of the words that refer to those referents. If you
do not think this, then we're on the same page.

> If direct access to the universe is necessary, do you reject the
>> possibility of the ancient "dream argument?" (the idea that we can never
>> know if our perceptions match reality)?
> Not sure why you ask about “direct access to the universe.” You have
> referents in your dreams just as you do in waking life. If you see a pink
> unicorn in your dream, it is not real, but you know both in your dream and
> after you awaken what you mean when you say you saw a pink unicorn.


> How is it the brain derives meaning when all it receives are nerve
>> signals? Even if you do not know, can you at least admit it stands as a
>> counterexample, as its existence proves that at least some things (brains)
>> *can* derive understanding from the mere statistical correlations of their
>> inputs?
> I have answered this. We don’t know how the brain does it, but we do know
> that form is not meaning, i.e., that the form of a word does not contain
> its meaning.

We agree on this. Meaning is not in the form of any one word. But might it
exist in the structures, patterns, and relations of words?

> GPT-4 knows this also. It will tell you that it does not know the meanings
> of individual words as it has no conscious experience. It knows only how to
> assemble words together in patterns that the users find meaningful.

For the ~sixth time now, you have ignored my question:

"can you at least admit it [the brain] stands as a counterexample as its
existence proves that at least some things (brains) *can* derive
understanding from the mere statistical correlations of their inputs?"

Note: I will continue to include this question in my replies to you and
keep incrementing its counter until you acknowledge and respond to it.

