[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Sat Apr 15 21:12:45 UTC 2023


On Sat, Apr 15, 2023, 4:22 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Fri, Apr 14, 2023 at 6:01 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> >>>
> >>> Imagine a machine that searches for a counterexample to Goldbach's
> conjecture .... So we arguably have a property here that is true of the
> program (it either halts or it doesn't) but inaccessible to us even when
> we know everything there is to know about the code itself.
> >>
> >>
> >> Interesting, yes.
> >
> >
> > Do you think this could open the door to first-person properties which
> are not understandable from their third-person descriptions?
>
>
> Not sure what you mean by "open the door," but my answer here is the same
> as for the paper you cited. I have no problem with the idea that we can
> create objective models of the mind that show how some properties are
> private or inaccessible. Psychologists have been doing it for centuries.
> The models all still fail to overcome this explanatory gap to which Nagel
> and I refer. There are facts of the world that exist only from a
> particular point of view and thus cannot be captured in objective
> language, which by definition can only describe the world from no
> particular point of view.
>
I agree. In a way, first-person views are all there are. It's amazing that
anything is communicable at all.
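(As an aside, to make that Goldbach machine concrete, here is a minimal
sketch in Python. The code is hypothetical and my own, just to illustrate
the point: the loop halts if and only if the conjecture is false, and
nothing in the source tells us which.)

    def is_prime(n):
        """Trial division; slow but sufficient for illustration."""
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def is_goldbach(n):
        """True if the even number n is the sum of two primes."""
        return any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1))

    n = 4
    while True:
        if not is_goldbach(n):
            print("Counterexample found:", n)
            break  # reached if and only if Goldbach's conjecture is false
        n += 2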



>
> >> However, you clarified above that...
> >>
> >> > It would be more accurate to say it demonstrates that it has overcome
> the symbol grounding problem.
> >>
> >> Okay, I can agree with that. It has "overcome" the symbol grounding
> problem for the language of mathematics without solving it in the same way
> that it has overcome the symbol grounding problem for English without
> solving it. It overcomes these problems with powerful statistical analysis
> of the patterns and rules of formal mathematics with no understanding of
> the meanings.
> >
> >
> > You presume there's something more to meaning than that
>
> Of course there is more to meaning than understanding how meaningless
> symbols relate statistically and grammatically to other meaningless symbols!
>
It's not obvious to me that understanding requires more than simply
"analysis of patterns". Patterns are all our brains receive from the world
after all.
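For instance, here is a toy sketch in Python (hypothetical, my own
illustration, nothing like GPT-4's actual architecture) of what a pure
"analysis of patterns" over meaningless symbols looks like:

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat sat on the rug".split()

    # Count how often each token follows each other token.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    # Predict a next token from co-occurrence statistics alone.
    print(bigrams["sat"].most_common(1))  # [('on', 2)]

Nothing in that sketch knows what a cat is, yet it already captures some
structure of the corpus. Scale the same idea up enormously and it is not
obvious to me where a line between "mere statistics" and understanding
could be drawn.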


> That is why I bring up this subject of the symbol grounding problem in
> philosophy. It is only in the grounding of symbols that we can know their
> meanings. This requires insight into the world outside of language and
> symbols.
>
Sensory input from the outside world is just patterns. Why should patterns
of activation in cells of the retina allow an understanding to develop,
but not patterns of symbols in a corpus of text? The best I've gotten from
you is that we don't know how the brain works, but that doesn't convince
me of your view. Regardless of how the brain does it, we agree that it
does it. This shows it is possible to develop understanding from an
analysis of patterns. Therefore there must be some error in the reasoning
or assumptions that lead you to conclude it is impossible.


> Otherwise, with respect to mathematical symbols, we are merely carrying out
> the formal operations of mathematics with no understanding, which is
> exactly what I believe GPT-4 does and can only do.
>
> GPT-4 agrees, but it is not that I look to GPT-4 as the authority. I look
> to my own understanding of language models as the authority, and I am
> relieved to see that I needn't argue that GPT-4 is stating falsehoods, as
> I was expecting to have to do when I first entered these discussions some
> weeks ago.
>
> I wonder why anyone feels it necessary to ascribe consciousness to
> language models in the first place.
>

Why do we feel it important to ascribe consciousness to other humans or to
animals?

> Outside of indulging our sci-fi fantasies, what purpose does this silly
> anthropomorphism serve? By Occam’s Razor, we should dismiss the idea as
> nonsense.
>
Occam's razor is about minimizing assumptions and complexity of theories.

For example, compare these two theories:
1. Consciousness supervenes on any information processing system.
2. Consciousness supervenes on any information processing system that uses
living cells as its computational substrate.

Regardless of whether either is true, the first theory is simpler: it
doesn't introduce exceptions or complications that aren't necessary to fit
the facts and observations.

Jason
