[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Wed Apr 12 21:45:09 UTC 2023


On Wed, Apr 12, 2023, 4:53 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Wed, Apr 12, 2023 at 11:25 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>> Nagel: We may call this the subjective character of experience. It is not
>>> captured by any of the familiar, recently devised reductive analyses of the
>>> mental, for all of them are logically compatible with its absence.
>>>
>>
>> This I do not agree with. This is the thinking that leads one to believe
>> qualia are epiphenomenal, and inessential, which leads to zombies, and
>> zombie twins, zombie earths, etc.
>>
>
> In the same paragraph, Nagel states that he does not deny that mental
> states can be causal, which means he is not advancing epiphenomenalism.
>

I misread Nagel there. He was critiquing epiphenomenalism rather than
endorsing it.

> I also don't see that it follows. His argument is only that subjective
> experience or qualia cannot be fully reduced to or explained by objective
> third party descriptions alone. Subjective experience has a first person
> element that defies any third person description in the language of science
> or functions or philosophy in general for that matter. This is what is
> meant by the explanatory gap.
>
> (hmm... I see now that at the end of your message, you acknowledged that
> his view does not lead to epiphenomenalism.)
>
> There is a sense in which I believe discussions about the philosophy of
> mind are wastes of time.
>

There is a lot we can learn about consciousness even if we can't share our
qualia. It is, in a sense, the most important question, as everything we
care about ultimately comes down to states of consciousness.


I agree with Nagel that first person subjective experience is real and
> central to the question and that it cannot be captured fully in or
> understood in terms of third party descriptions. This is mostly what I mean
> when I say that I believe subjective experience is primary and irreducible.
>

I can't say I disagree with this. I don't know how far you got in my
existence article, but it reaches the conclusion that consciousness is
primary.


> As I've mentioned several times when you have pressed me for answers,
> the brain/mind is still a great mystery. Neuroscience is still in its
> infancy. We do not know what are sometimes called the neural correlates of
> consciousness, or even necessarily that such correlates exist, though I
> suspect they do. This answer was not good enough for you, and you suggested
> that I was dodging your questions when actually I was answering honestly
> that I do not know.
>

The questions I ask are important: answering them either clarifies your
position or leads to a better understanding of the issue at hand. If you
don't know, or would prefer not to answer, that is fine. But often you
simply skip a question I ask without giving any indication you saw it,
which leads me to ask it again, e.g., the partial neural substitution
question, which I have asked a few times now.


You wanted me to suppose that the brain/mind is an exception to the rule
> that understanding comes from statistical correlations, but nobody knows
> how the brain comes to understand anything.
>

Our best understanding is that it's related to neurons and what they do. We
have realistic models of what neurons do and how they do it. If you think
something more is necessary for human consciousness but can't say what it
is, that would satisfy me as an answer.
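
To make "realistic models of what neurons do" concrete, here is a minimal
sketch of a leaky integrate-and-fire neuron, one of the standard simplified
models. The parameter values are illustrative, not fitted to biology:

# Minimal leaky integrate-and-fire neuron (illustrative parameters).
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # reset potential after a spike (mV)

v = v_rest
spike_times = []
for step in range(1000):
    i_input = 20.0 if 200 <= step < 800 else 0.0  # injected drive (mV-equivalent)
    # The membrane potential leaks toward rest while integrating input.
    v += (dt / tau) * (v_rest - v + i_input)
    if v >= v_thresh:
        spike_times.append(step * dt)  # record spike time in ms
        v = v_reset

print(len(spike_times), "spikes; first at", spike_times[0], "ms")

Models at this level (and far more detailed ones, down to ion channels)
reproduce real firing behavior quite well, which is part of why I say our
best understanding points at neurons and what they do.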

If we are operating from different premises we are sure to come to
different, irreconcilable conclusions.

Let's see whether, when we agree on a premise, we can reach the same
conclusion. If we assume there is nothing critical about the brain and
neurons that we have yet to discover, would you agree that the inputs to
the brain from the external world are ultimately just nerve firings from
the senses, and that from the brain's point of view, the only information
it has access to is the timing of which nerves fire when? If you agree so
far, would you then agree that the only thing the brain can use as a basis
for learning about the external world is the correlations and patterns
among the firing nerves?
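
As a toy illustration of that premise (my own construction, not a model of
any real circuit): suppose four "nerves" fire over time, two driven by one
hidden cause in the world and two by another. Looking only at which nerves
fire when, the hidden structure is recoverable:

import numpy as np

rng = np.random.default_rng(0)
T = 5000  # number of time steps

# Two hidden causes in the external world; the "brain" never sees them,
# only the resulting nerve firings.
cause_a = rng.random(T) < 0.2
cause_b = rng.random(T) < 0.2

def noise():
    return rng.random(T) < 0.05  # spontaneous firing

firings = np.stack([
    cause_a | noise(), cause_a | noise(),   # nerves 0, 1
    cause_b | noise(), cause_b | noise(),   # nerves 2, 3
]).astype(float)

# Learning from nothing but correlations among firing nerves:
print(np.round(np.corrcoef(firings), 2))
# Nerves 0 and 1 correlate strongly with each other, as do 2 and 3,
# revealing the two hidden causes from spike statistics alone.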


> I'm much better at arguing what I believe the brain/mind cannot possibly
> be than what I believe it to be, and I believe it cannot possibly be
> akin to a digital computer running a large language model.
>

I agree the human brain is not akin to an LLM.

But this is separate from the propositions you have also disagreed with:
1. That a digital computer (or LLM) can have understanding.
2. That a digital computer (or LLM) can be conscious.


Language models cannot possibly have true understanding of the meanings of
> individual words or sentences except in terms of their statistical
> relations to other words and sentences, the meanings of which they also
> cannot possibly understand.
>

I give the LLM some instructions. It follows them. I conclude from this
that the LLM understood my instructions. You conclude it did not.

I must wonder: what definition of "understand" could you possibly be using
that is consistent with the above paragraph?
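
For concreteness, here is a toy sketch (my own construction, far simpler
than an LLM) of the kind of statistical relations at issue. Even in a tiny
corpus, "cats" and "dogs" come out as similar purely because they occur in
shared contexts:

import numpy as np
from collections import defaultdict

corpus = ["cats chase mice", "dogs chase cats",
          "cats eat fish", "dogs eat meat",
          "cats sleep", "dogs sleep"]

vocab = sorted({w for s in corpus for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
vectors = defaultdict(lambda: np.zeros(len(vocab)))

# Each word's "meaning" here is just counts of its context words.
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                vectors[w][index[c]] += 1

def similarity(a, b):
    va, vb = vectors[a], vectors[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(round(similarity("cats", "dogs"), 2))  # higher: many shared contexts
print(round(similarity("cats", "fish"), 2))  # lower: few shared contexts

Nothing in that sketch is grounded in actual cats or fish, yet relational
structure emerges; the open question between us is whether enough of that
structure, at LLM scale, constitutes understanding.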

> I'm glad to see that GPT-4 "knows" how LLMs work and reports the same
> conclusion.
>

In the past you agreed we can't take it at its word. Have you changed your
mind on this?

Jason