[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Sat Apr 15 20:22:08 UTC 2023


On Fri, Apr 14, 2023 at 6:01 PM Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:

>>>
>>> Imagine a machine that searches for a counterexample to Goldbach's conjecture .... So, we arguably have a property here which is true for the program: it either halts or doesn't, but one which is inaccessible to us even when we know everything there is to know about the code itself.
>>
>>
>> Interesting, yes.
>
>
> Do you think this could open the door to first person properties which are not understandable from their third person descriptions?


Not sure what you mean by "open the door," but my answer here is the same as for the paper you cited. I have no problem with the idea that we can create objective models of the mind that show how some properties are private or inaccessible. Psychologists have been doing it for centuries. All such models still fail to bridge the explanatory gap to which Nagel and I refer. There are facts of the world that exist only from a particular point of view, and these cannot be captured in objective language, which by definition describes the world from no particular point of view.
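
For concreteness, here is a minimal sketch of the machine you describe, in Python (my own toy version, not anything from your message). It halts and returns n iff some even n > 2 is not the sum of two primes, i.e., iff Goldbach's conjecture is false; otherwise it runs forever. Whether it halts is exactly the property we cannot read off from the code:

    def is_prime(k):
        """Trial division; slow but enough for a sketch."""
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    def goldbach_search():
        """Halts iff Goldbach's conjecture is false."""
        n = 4
        while True:
            # Check every even n >= 4 for a decomposition into two primes.
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
                return n  # counterexample found: the machine halts
            n += 2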


>> However, you clarified above that...
>>
>> > It would be more accurate to say it demonstrates that it has overcome the symbol grounding problem.
>>
>> Okay, I can agree with that. It has "overcome" the symbol grounding problem for the language of mathematics without solving it, in the same way that it has overcome the symbol grounding problem for English without solving it. It overcomes these problems with powerful statistical analysis of the patterns and rules of formal mathematics, with no understanding of the meanings.
>
>
> You presume there's something more to meaning than that

Of course there is more to meaning than understanding how meaningless symbols relate statistically and grammatically to other meaningless symbols! That is why I bring up the symbol grounding problem in philosophy. It is only in the grounding of symbols that we can know their meanings, and this requires insight into the world outside of language and symbols. Otherwise, with respect to mathematical symbols, we are merely carrying out the formal operations of mathematics with no understanding, which is exactly what I believe GPT-4 does and all it can do.
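
To make concrete what I mean by "statistical and grammatical relations between symbols," here is a deliberately tiny toy of my own, a bigram sampler, nothing like GPT-4's actual architecture. It emits plausible-looking sequences purely from co-occurrence counts, with no access to anything outside the symbols themselves:

    import random
    from collections import defaultdict

    def train_bigrams(tokens):
        """Record which token follows which; pure co-occurrence statistics."""
        table = defaultdict(list)
        for a, b in zip(tokens, tokens[1:]):
            table[a].append(b)
        return table

    def generate(table, start, length=8):
        """Sample successors token by token; the model never consults
        anything outside the text, i.e., the symbols are ungrounded."""
        out = [start]
        for _ in range(length - 1):
            successors = table.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    tokens = "the cat sat on the mat and the cat slept".split()
    print(generate(train_bigrams(tokens), "the"))

The output can look grammatical, yet nothing in the program knows what a cat or a mat is. The question is whether scaling this kind of statistics up ever amounts to more than that.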

GPT-4 agrees, but it is not that I look to GPT-4 as the authority. I look to my own understanding of language models as the authority, and I am relieved to see that I needn't argue that GPT-4 is stating falsehoods, as I expected I would need to when I first entered these discussions some weeks ago.

I wonder why anyone feels it necessary to ascribe consciousness to language models in the first place. Outside of indulging our sci-fi fantasies, what purpose does this silly anthropomorphism serve? By Occam's Razor, we should dismiss the idea as an unnecessary assumption.


-gts



