[ExI] GPT-4 on its inability to solve the symbol grounding problem

Gordon Swobe gordon.swobe at gmail.com
Mon Apr 3 18:17:35 UTC 2023


People keep attributing these claims to me, but as shown below, it is GPT-4
itself that says it has no true understanding of the world.

It understands (so to speak) how language models work.

-gts


On Mon, Apr 3, 2023 at 12:09 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Yes, it is very exciting to see this much discussion about colorness
> qualities again.  In the old days, the term 'qualia' became a dirty word
> on this list, and most people would just filter out any thread containing
> that word.  ChatGPT is making it necessary that we explicitly understand
> the difference between abstract systems and phenomenal systems that
> represent knowledge with physical qualities (or phenomenal qualities, for
> those of you who think qualities are not physical, or not
> approachable/sharable via science, apparently the majority of you?).
>
> But I disagree with your assertion that GPT-4 has no idea what it is
> talking about, and I don't think GPT-4 is saying that at all in these
> answers.  Its large language model clearly has the ability to model, at
> least abstractly, a large subset of what humanity talks about in very
> powerful ways, and is on the verge of General Intelligence.  Just because
> it doesn't know what redness is like doesn't mean it can't know more about
> redness than most humans.  Just like black-and-white Mary, it can clearly
> indicate in these kinds of answers that it has a very powerful grasp of
> reality, of the difference between itself and us, and of what Mary will
> learn when she experiences redness.  It is only a matter of time until an
> abstract AI helps us discover which of all our descriptions of stuff in
> the brain is a description of redness, then endows itself with that so it
> can finally say, "Oh, THAT is what redness is like," thereby finally
> gaining a grounded understanding and becoming phenomenally conscious like
> us.
>
> I so look forward to the day when we finally take an interest in doing
> the same: finally discovering new phenomenal qualities no human has ever
> experienced before, and endowing ourselves with those.  I'm so tired of
> only knowing the qualities things seem to be.  Am I the only one who wants
> to know what all that stuff out there is really like, and what phenomenal
> uploading to phenomenal avatars will really be like in a few more years?
>
>
>
>
> On Fri, Mar 31, 2023 at 5:41 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> Many people here seem interested in the question of whether LLMs can have
>> conscious understanding of the words/symbols they take as inputs and
>> generate as outputs. It seems that most people here want to think GPT-4
>> has something like a conscious mind, but I agree with GPT-4 below that it
>> has no idea what it is talking about.
>>
>>
>> On Thu, Mar 30, 2023 at 1:55 PM Gordon Swobe <gordon.swobe at gmail.com>
>> wrote:
>>
>> GPT-4 explained to me that it could not solve the symbol grounding
>>> problem for itself (I reported this in another thread). I asked it to
>>> explain the significance.
>>>
>>> My prompt: What is the significance of the fact that ChatGPT cannot
>>> solve the symbol grounding problem for itself and can only simulate that it
>>> has solved it? How does this put ChatGPT at a disadvantage compared to
>>> humans?
>>>
>>> GPT-4: The symbol grounding problem refers to the difficulty in
>>> connecting abstract symbols (such as words or concepts) to their real-world
>>> referents. While ChatGPT can simulate understanding by generating
>>> human-like text based on its training data, it doesn't possess the direct
>>> experience or sensory input that humans have to ground these symbols in
>>> reality.
>>>
>>> The significance of this limitation has several implications:
>>>
>>> Lack of true understanding: ChatGPT doesn't have a genuine understanding
>>> of the world, as it relies on patterns and associations found in the text
>>> it was trained on. It can't form new connections based on direct
>>> experience, which may result in limitations in problem-solving or abstract
>>> reasoning.
>>>
>>> Difficulty with context: Without grounding in real-world experiences,
>>> ChatGPT may struggle to fully understand context or cultural nuances. It
>>> may provide answers that are technically correct but inappropriate or
>>> irrelevant in a specific situation.
>>>
>>> Inability to learn from sensory input: Humans learn through direct
>>> experience and sensory input, which allows them to adapt and understand new
>>> situations better. ChatGPT can't update its knowledge through real-world
>>> experiences, making it less adaptable to novel contexts.
>>>
>>> Absence of common sense: Although ChatGPT can generate text that appears
>>> to show common sense understanding, it might still make errors that a human
>>> would easily avoid. This is because its responses are based on patterns in
>>> the data, rather than a true comprehension of the concepts.
>>>
>>> Limited creativity: ChatGPT's creativity is confined to the scope of its
>>> training data. While it can generate new text based on patterns it has seen
>>> before, it can't create truly novel ideas or solutions in the same way that
>>> humans, with their real-world grounding, can.
>>>
>>> In summary, the inability to solve the symbol grounding problem puts
>>> ChatGPT at a disadvantage compared to humans in terms of true
>>> understanding, adaptability, context-awareness, common sense, and
>>> creativity. However, it is essential to remember that ChatGPT is an AI
>>> language model designed to assist and augment human capabilities, not to
>>> fully replicate human cognition.
>>> ---
>>>
>>
>> -gts
>>