[ExI] GPT-4 on its inability to solve the symbol grounding problem
Gordon Swobe
gordon.swobe at gmail.com
Wed Apr 12 20:52:49 UTC 2023
On Wed, Apr 12, 2023 at 11:25 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>> Nagel: We may call this the subjective character of experience. It is not
>> captured by any of the familiar, recently devised reductive analyses of the
>> mental, for all of them are logically compatible with its absence.
>
> This I do not agree with. This is the thinking that leads one to believe
> qualia are epiphenomenal, and inessential, which leads to zombies, and
> zombie twins, zombie earths, etc.
In the same paragraph, Nagel states that he does not deny that mental
states can be causal, which means he is not advancing epiphenomenalism. I
also don't see that epiphenomenalism follows. His argument is only that
subjective experience, or qualia, cannot be fully reduced to or explained
by objective third-person descriptions alone. Subjective experience has a
first-person element that defies any third-person description, whether in
the language of science, of functions, or of philosophy generally. This is
what is meant by the explanatory gap.
(hmm... I see now that at the end of your message, you acknowledged that
his view does not lead to epiphenomenalism.)
There is a sense in which I believe discussions about the philosophy of
mind are wastes of time, but I agree with Nagel that first-person
subjective experience is real and central to the question, and that it
cannot be fully captured in or understood in terms of third-person
descriptions. This is mostly what I mean when I say that I believe
subjective experience is primary and irreducible.
As I've mentioned several times when you have pressed me for answers,
the brain/mind is still a great mystery. Neuroscience is still in its
infancy. We have not identified what are sometimes called the neural
correlates of consciousness, nor do we know for certain that such
correlates exist, though I suspect they do. This answer was not good
enough for you, and you suggested that I was dodging your questions when
actually I was answering honestly that I do not know. You wanted me to
explain how the brain/mind could be an exception to the rule that
understanding comes from statistical correlations, but nobody knows how
the brain comes to understand anything.
I'm much better at arguing what I believe the brain/mind cannot possibly be
than what I believe it to be, and I believe it cannot possibly be akin to a
digital computer running a large language model. Language models cannot
possibly have true understanding of the meanings of individual words or
sentences, except in terms of their statistical relations to other words
and sentences whose meanings they likewise cannot possibly understand. I'm
glad to see that GPT-4 "knows" how LLMs work and reports the same
conclusion.
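
To make the point concrete, here is a minimal sketch in Python of the
purely distributional picture I am describing. It is an illustration of
the idea only, not a description of how GPT-4 itself is implemented: each
word is represented by nothing but its statistical co-occurrence with
other words, so "cat" and "dog" come out "similar" without the program
ever referring to an actual cat or dog.

# Minimal sketch of distributional "meaning": a word is represented
# only by counts of the words that appear near it in a corpus.
# An illustration of the idea, not how GPT-4 is implemented.
from collections import Counter
from math import sqrt

corpus = "the cat sat on the mat the dog sat on the rug".split()

def cooccurrence_vector(target, window=2):
    """Count the words appearing within `window` positions of `target`."""
    counts = Counter()
    for i, word in enumerate(corpus):
        if word == target:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# "cat" and "dog" score as similar (about 0.87) purely because they
# occur in similar contexts -- a relation among word forms only.
print(cosine(cooccurrence_vector("cat"), cooccurrence_vector("dog")))

The similarity score here reflects only relations among word forms;
nothing in the program grounds those forms in the world, which is exactly
the symbol grounding problem.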
-gts