[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 13 07:13:01 UTC 2023


On Wed, Apr 12, 2023 at 3:54 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:


> Let's see if when we agree on a premise that we can reach the same
> conclusion:
>

> If we assume there's not something critical which we have yet to discover
> about the brain and neurons, would you agree that the inputs to the brain
> from the external world are ultimately just nerve firings from the senses,
> and from the brain's point of view, the only information it has access to
> is the timings of which nerves fire when? If you agree so far, then would
> you agree the only thing the brain could use as a basis of learning about
> the external world are the correlations and patterns among the firing…
>

I’m not sure about “correlations and patterns,” but yes: only if I
reluctantly make the assumption that there is nothing more to the brain
and mind can I agree. Recall that you already asked me this question
and I replied that I am not a strict empiricist.

Also, even if that is all there is to the brain and mind, in my view and in
agreement with Nagel, no *objective* description of these neural processes
in the language of science or computation can capture the facts of
conscious experience, which exist not objectively but only from a
particular point of view.

You might want to argue that my position here leads to dualism, but that is
not necessarily the case. The dualist asserts a kind of immaterial
mind-substance that exists separate from the material, but that supposed
mind-substance is thought to exist objectively. The dualist makes the same
mistake as the physicalist.

> I agree the human brain is not akin to an LLM.
>
> But this is separate from the propositions you also have disagreed with:
> 1. That a digital computer (or LLM) can have understanding.
> 2. That a digital computer (or LLM) can be conscious.
>

Yes, and it is best to combine the two. I disagree that LLMs have conscious
understanding.


> I give the LLM some instructions. It follows them. I concluded from this
> the LLM understood my instructions. You conclude it did not.
>
> I must wonder: what definition of "understand" could you possibly be using
> that is consistent with the above paragraph?
>

As above, LLMs have no *conscious* understanding, and this LLM called GPT-4
agrees.

As I’ve written, the sort of unconscious understanding to which you refer
is trivial and uninteresting. My smart doorbell “understands” when there is
motion outside my door. I am not impressed.
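To make the triviality concrete, a minimal sketch (hypothetical names, not
any real doorbell's firmware) of the kind of fixed stimulus-response rule
such a device follows might look like this:

    # Hypothetical sketch: the fixed rule a smart doorbell applies.
    # If this counts as "understanding," it is purely mechanical and
    # unconscious -- a mapping from a sensor reading to a canned response.

    def on_sensor_reading(motion_detected: bool) -> str:
        """Map a single motion-sensor reading to the doorbell's response."""
        if motion_detected:
            return "send_alert"   # the doorbell "understands" there is motion
        return "do_nothing"

    # The same input always produces the same response; nothing is experienced.
    print(on_sensor_reading(True))   # -> send_alert
    print(on_sensor_reading(False))  # -> do_nothing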

-gts

