[ExI] GPT-4 on its inability to solve the symbol grounding problem
Jason Resch
jasonresch at gmail.com
Thu Apr 6 12:14:08 UTC 2023
On Thu, Apr 6, 2023, 1:49 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> On Wed, Apr 5, 2023 at 9:50 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> "In the absence of the things they mean, they have no meaning" -- This I
>> disagree with. If two English speakers survived while the rest of the
>> universe disappeared completely, the two speakers could still carry on a
>> meaningful conversation. Their words would still mean things to them.
>>
>
> I'm sorry but that is not the point. My statement was merely a casual way
> of saying that words have referents, that those referents give them
> meaning, and that without those referents they are meaningless. The English
> speakers in your example have referents for their words in their minds and
> memories.
>
This is a crucial point though. It means that meaning can exist entirely
within the confines and structure of a mind, independent of the universe
outside it.
> Giovanni apparently does not like or understand the concept. I think the
> former, as it is integral to the argument that LLMs have no access to the
> meanings of words in the texts on which they are trained.
>
If direct access to the universe is necessary, do you reject the
possibility of the ancient "dream argument" (the idea that we can never
know if our perceptions match reality)? This idea has spawned many others:
the butterfly dream, Descartes's evil demon, Boltzmann brains, brains in
vats, the simulation hypothesis, etc. Common to all these scenarios is the
observation that a mind only knows the information it has/is given, and so
there's no guarantee that information matches the true reality containing
the mind. Do you see how this implication relates to our discussion of LLMs?
>
> Unlike the English speakers in your example, an LLM has no access to the
> referents for the words on which it is trained. It can do no more than
> analyze the statistical relationships and patterns between and among them
> and make predictions about future words and patterns, which by the way is
> *exactly what GPT-4 says it does.* GPT-4 says I am quite accurate to call
> it an unconscious, highly sophisticated autocomplete feature similar to but
> more powerful than what is found in any word processing application.
>
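To make the phrase "analyze statistical relationships and predict future
words" concrete, here is a toy, purely illustrative bigram autocomplete in
Python. This is a hypothetical sketch of the general idea only; GPT-4 and
other real LLMs learn neural representations rather than raw co-occurrence
counts.

# Toy bigram "autocomplete": predicts the next word purely from
# word co-occurrence statistics in its training text.
# Hypothetical illustration only, not how GPT-4 is implemented.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# Count how often each word is followed by each other word.
successors = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, if any."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (most frequent successor of 'the')

Even this trivial predictor "knows" only the co-occurrence statistics of its
training text, which is the sense of "autocomplete" at issue here.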
I have now asked you this question around five times, and in every case you
have ignored it and failed to respond. This is why I suspected cognitive
dissonance might be at play, but I will risk asking it again:
How is it that the brain derives meaning when all it receives are nerve
signals? Even if you do not know, can you at least admit that it stands as a
counterexample, since its existence proves that at least some things (brains)
*can* derive understanding from the mere statistical correlations of their
inputs?
Jason