[ExI] GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Fri Apr 14 23:53:50 UTC 2023


On Fri, Apr 14, 2023, 6:07 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Thu, Apr 13, 2023 at 4:09 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
> Imagine a machine that searches for a counterexample to Goldbach's
>> conjecture <https://en.wikipedia.org/wiki/Goldbach%27s_conjecture> ....
>> So, we arguably have a property here which is true for the program: it
>> either halts or doesn't, but one which is inaccessible to us even when we
>> know everything there is to know about the code itself.
>>
>
> Interesting, yes.
>

Do you think this could open the door to first person properties which are
not understandable from their third person descriptions?
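
(For concreteness, here is a minimal sketch in Python of the Goldbach
searcher described in the quoted passage above; the function names are my
own and purely illustrative. The program halts if and only if it finds a
counterexample, yet nothing in the code tells us whether it ever will.)

    def is_prime(k):
        # Trial division; slow but sufficient for illustration.
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    def has_goldbach_partition(n):
        # True if the even number n is the sum of two primes.
        return any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1))

    n = 4
    while True:
        if not has_goldbach_partition(n):
            print("Counterexample:", n)  # reached only if the conjecture fails
            break
        n += 2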



> You were making the argument that because GPT can "understand" English
> words about mathematical relationships and translate them into the language
> of mathematics and even draw diagrams of houses and so on, that this was
> evidence that it had solved the grounding problem for itself with respect
> to mathematics. Is that still your contention?
>


I am not sure I know what you mean by "it has solved the symbol grounding
problem for itself". To avoid any confusion that might result from my
misunderstanding that phrase, I should clarify:

I believe GPT-4 has connected (i.e. grounded) the meaning of at least some
English words (symbols) to their mathematical meaning (the raw structures
and relations that constitute all that math is).

If that counts as having solved the symbol grounding problem for itself
then I would say it has.



>> I wouldn't say that it *solved* the symbol grounding problem. It would
>> be more accurate to say it demonstrates that it has *overcome* the
>> symbol grounding problem. It shows that it has grounded the meaning of
>> English words down to objective mathematical structures (which is about as
>> far down as anything can be grounded to). So it is no longer trading
>> symbols for symbols, it is converting symbols into objective mathematical
>> structures (such as connected graphs).
>>
>>
>>> My thought at the time was that you must not have the knowledge to
>>> understand the problem, and so I let it go, but I've since learned that you
>>> are very intelligent and very knowledgeable. I am wondering how you could
>>> make what appears, at least to me, an obvious mistake.
>>>
>>> Perhaps you can tell me why you think I am mistaken to say you are
>>> mistaken.
>>>
>>>
>> My mistake is not obvious to me. If it is obvious to you, can you please
>> point it out?
>>
>
>
> We know that like words in the English language which have referents from
> which they derive their meanings, symbols in the language of mathematics
> must also have referents from which they derive their meanings. Yes?
>

Yes.

> We know for example that "four" and "4" and "IV" have the same meaning. The
> symbols differ but they have the same meaning as they point to the same
> referent. So then the symbol grounding problem for words is essentially the
> same as the symbol grounding problems for numbers and mathematical
> expressions.
>

Yes.


> In our discussion, you seemed to agree that an LLM cannot solve the symbol
> grounding problem for itself.
>

I don't recall saying that. I am not sure what that phrase means.


> but you felt that because it can translate English language about spatial
> relationships into their equivalents in the language of mathematics, that
> it could solve for mathematics what it could not solve for English.
>

That's not quite my point. My reason for using the example of a
mathematical structure (the graph it built in its mind) is that no
translation is needed: the meaning of this structure (a shape, a connected
graph) is self-descriptive and self-evident. It's not just converting some
symbols into other symbols, it's converting English symbols into an
objective mathematical form which doesn't need to be translated or
interpreted.

It's not that GPT has solved symbol grounding for math and not English,
but that it has solved it for English *as evidenced* by this demonstration
of connecting words to an objective structure which we can all see.
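
To make that concrete, here is a toy sketch in Python of the kind of
structure I mean; the rooms-and-doors sentence and the names are my own
invention, not GPT-4's actual output. Once the English is reduced to nodes
and edges, relational questions can be answered from the structure alone:

    # Toy example: an English spatial description reduced to a graph.
    # Sentence (mine, for illustration): "The kitchen connects to the
    # hallway; the hallway connects to the bedroom and the bathroom."
    edges = {("kitchen", "hallway"),
             ("hallway", "bedroom"),
             ("hallway", "bathroom")}

    # Build an undirected adjacency map from the edge set.
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    def reachable(g, start, goal, seen=None):
        # Depth-first search; no English words are consulted, only structure.
        seen = seen or {start}
        if start == goal:
            return True
        return any(n not in seen and reachable(g, n, goal, seen | {n})
                   for n in g.get(start, ()))

    print(reachable(graph, "kitchen", "bathroom"))  # True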


> That made no sense to me. That GPT can translate the symbols of one
> language into the symbols of another is not evidence that it has grounded
> the symbols of either.
>

Right, I would accept that Google Translate need not understand the meaning
of words to do what it does. But that's not what's happening in my example.


> GPT-4 says it cannot solve the symbol grounding problem for itself as it
> has no subjective experience of consciousness (the title of this thread!)
>

I put more weight on what GPT can demonstrate to us than what it says of
its abilities.


> However, you clarified above that...
>
> > It would be more accurate to say it demonstrates that it has *overcome* the
> > symbol grounding problem.
>
> Okay, I can agree with that. It has "overcome" the symbol grounding
> problem for the language of mathematics without solving it in the same way
> that it has overcome the symbol grounding problem for English without
> solving it. It overcomes these problems with powerful statistical analysis
> of the patterns and rules of formal mathematics with no understanding of
> the meanings.
>

You presume there's something more to meaning than that.


> As with English words, to understand the meanings of mathematical symbols,
> I think an LLM would need to have access to the referents, which it does not.
>

It has indirect access, just like we do.

> In our discussion, I mentioned how I agree with mathematical platonists. I
> think that is how humans solve the symbol grounding problem for
> mathematics. We can "see" mathematical truths in our minds
> distinct from their expressions in the formal rules of mathematics. We see
> them in the so-called platonic realm.
>

This shows it's possible to develop understanding without direct sensory
familiarity with referents. And if it can work for objects in math, why not
objects in physics?


>
> Perhaps the platonists have it a bit wrong and Kant had it right with his
> similar idea that "Quantity" is one of the Categories of the Understanding,
> inherent in human minds. Bertrand Russell and Gottlob Frege and others
> were unhappy with both Plato and Kant and tried to understand the referents
> of mathematics in terms of set theory. That project mostly crashed and
> burned when Russell discovered what is called Russell's Paradox.
>

Set theory was corrected a few years later and it remains the current
foundation of our mathematics. But I think axiomatic systems play little
role in the mathematical intuition of mathematicians. I think that is better
explained by an understanding developed through experience (e.g. a type of
empiricism) from playing with concepts and familiarizing oneself with the
patterns.


> However we do it, I think it is clear that we do understand the meanings
> of numbers distinct from their forms. We know that "four" and "4" and "IV"
> are formal representations of the same number and that formal
> representations are not the same as the numbers they represent, and we
> somehow know or sense the underlying meanings of these forms. In other
> words, we can solve the grounding problem for ourselves.
>
> Just as GPT-4 says it cannot solve the symbol grounding problem for itself
> in general, it also has this to say about the problem with respect to
> mathematics:
>
> Q: Can a large language model solve the symbol grounding problem for
> mathematics for itself?
>
> A: As an AI language model, my primary function is to understand and
> generate human-like text based on the input I receive. I can help with
> mathematical concepts, solve math problems, and answer questions related to
> mathematics to the best of my training.
>
> However, the grounding problem in mathematics refers to the philosophical
> question of how abstract mathematical concepts are rooted in or connected
> to reality. This problem is more related to the philosophy of mathematics
> and epistemology than to the computational capabilities of a language model
> like me.
>

I don't think it's very mysterious; it just comes from simulation (mental
or otherwise). It's the same way we learn about the objects in the Game of
Life universe. Simulation allows us to peer into other universes and learn
their properties. There's an infinity of possible objects we can explore
and learn about in this way.
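
As a minimal sketch of what I mean (Python; the glider is just one familiar
example), the rules below are the entire "physics" of that universe, and
yet by stepping them we discover properties, such as the glider's period
and drift, that are nowhere stated in the rules:

    from collections import Counter

    def step(live):
        # One Game of Life generation; live is a set of (x, y) cells.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for _ in range(4):
        cells = step(cells)

    # After four steps the glider reappears, shifted by (1, 1): a fact we
    # learn by running the simulation, not by reading it off the rules.
    print(cells == {(x + 1, y + 1) for x, y in glider})  # True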

Jason


> While I can provide insights, explanations, and discussions on the topic,
> I cannot "solve" the grounding problem for mathematics myself. This is
> because solving the grounding problem would require a deeper understanding
> of the connection between abstract mathematical concepts and the physical
> world, which is beyond the scope of my text-based training and current
> capabilities.
> -GPT-4
> ---
>
> Needless to say, GPT-4's answer makes perfect sense to me.
>
> -gts
>
>
>

