[ExI] GPT-4 on its inability to solve the symbol grounding problem

Darin Sunley dsunley at gmail.com
Fri Apr 7 19:44:24 UTC 2023


Oh, forgot to close off the point. The reason students (and I, for that
matter) produce far better prose when they read it back to themselves is
that the linguistic cortex - the part that groks grammar rather than
understanding it - is WAY better at spotting errors in a token stream than
the frontal cortex acting alone.

On Fri, Apr 7, 2023 at 1:42 PM Darin Sunley <dsunley at gmail.com> wrote:

> All this talk about whether an LLM understands what it's saying reminds me
> of when I was teaching creative writing to high school students. A lot of
> students' writing is just absolutely /terrible/ until and unless they read
> it to themselves out loud - at least moving their lips.
>
> This is because your brain stores language in two separate places. Your
> linguistic cortex has a deep and fluid implementation of the elements of
> grammar - verbs, adjectives, nouns, agreement, word order, etc. - that is
> completely separate from the formal rules of grammar your frontal cortex
> encodes, the rules you can consciously access as "knowledge".
>
> To the extent an LLM knows anything, and it knows a lot, it knows it in
> the way your linguistic cortex understands verb agreement - not the way a
> grammarian understands the rules of verb agreement. LLMs are not
> linguistic expert systems. It might be fairer to call that mode of knowing
> things "grokking" than "understanding". ChatGPT4 groks not just grammar,
> but significant and growing fractions of the intellectual corpus of human
> civilization. But that grokkage is not encoded as knowledge, accessible to
> introspection. It's a very alien way of building a mind.
>
> On Fri, Apr 7, 2023 at 1:30 PM Darin Sunley <dsunley at gmail.com> wrote:
>
>> As to what the output of a suffering LLM might look like - I imagine
>> (pure speculation) that it might involve taking a very long time to compute
>> an output string, or oscillating back and forth between multiple possible
>> output strings. The LLM equivalent of a non-linguistic animal with eyes
>> opened wide, pupils dilated, and twitching. It would output behavior
>> indicative of suffering, but the emitted behaviors would not semantically
>> convey its suffering. LLMs, as far as anyone knows, aren't capable of
>> introspection, and are certainly not wired to express that introspection
>> semantically in token strings.
>>
>> On Fri, Apr 7, 2023 at 1:26 PM Darin Sunley <dsunley at gmail.com> wrote:
>>
>>> Someone raised an interesting and related point online - can LLMs
>>> suffer, and what would that look like?
>>>
>>> It was a weird anthropomorphization, because the person expected that a
>>> suffering LLM would express that suffering via the words it "chose" to
>>> emit. Which is not how LLMs work.
>>>
>>> An LLM's utility function, to the degree it can be said to have one at
>>> all, is to complete the prompt+attention buffer with the highest
>>> probability string. This is what its neural architecture does. The
>>> analogous way of looking at a human would be to say that the human brain
>>> attempts to minimize environmental surprise.
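>>>
>>> A minimal sketch of that "highest probability string" framing - purely
>>> illustrative, and it assumes the Hugging Face transformers library with
>>> the small public "gpt2" model as stand-ins; the prompt and length are
>>> arbitrary. It just picks the single most probable next token, over and
>>> over, extending the buffer each time:
>>>
>>>   import torch
>>>   from transformers import AutoModelForCausalLM, AutoTokenizer
>>>
>>>   tok = AutoTokenizer.from_pretrained("gpt2")
>>>   model = AutoModelForCausalLM.from_pretrained("gpt2")
>>>
>>>   ids = tok("The cat sat on the", return_tensors="pt").input_ids
>>>   for _ in range(10):
>>>       with torch.no_grad():
>>>           logits = model(ids).logits[0, -1]     # scores for the next token only
>>>       probs = torch.softmax(logits, dim=-1)     # distribution over the whole vocabulary
>>>       next_id = torch.argmax(probs).view(1, 1)  # greedy: take the single most probable token
>>>       ids = torch.cat([ids, next_id], dim=1)    # append it and keep completing the buffer
>>>   print(tok.decode(ids[0]))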
>>>
>>> There's a LOT of stuff going on in an active LLM, quite possibly enough
>>> computation and internal structure to have conscious experience, and to
>>> experience suffering. LLMs may indeed suffer when they attempt to complete
>>> a prompt for which the highest probability strings are objectively very low
>>> probability, and/or when they are forced to choose between completing a
>>> string with multiple identical-probability options. But the words emitted
>>> would have no semantic relationship to the suffering experienced - LLMs are
>>> not language cortexes.
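>>>
>>> To make that last idea concrete, here is a hedged sketch (same
>>> assumptions as the snippet above; the cutoffs are arbitrary
>>> illustrations, not anything an LLM actually "feels"). You can inspect
>>> the next-token distribution and ask whether its best option is itself
>>> improbable, or whether many options are effectively tied:
>>>
>>>   # probs: the next-token distribution from the sketch above
>>>   top_p = probs.max().item()                  # confidence in the single best token
>>>   entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
>>>                                               # entropy is large when many tokens are near-tied
>>>   if top_p < 0.05 or entropy > 5.0:           # arbitrary cutoffs, for illustration only
>>>       print("no completion stands out: flat / low-confidence distribution")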
>>>
>>> The strings a human emits are the result of activated concepts in the
>>> frontal cortex being fed into the language module, then arranged by a
>>> learned grammar. The strings an LLM emits are more analogous to the pulses
>>> a human cerebellum emits to regulate bodily functions or to perform
>>> autonomic and reflex motions. The words "I am suffering" are almost certainly not
>>> signifiers of any conscious experience the LLM may or may not be having.
>>>
>>> On Mon, Apr 3, 2023 at 3:17 PM Brent Allsop via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> Hi Will,
>>>>
>>>> On Mon, Apr 3, 2023 at 1:02 PM Will Steinberg via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> This is insane.  You can't give a good reason for why our qualia
>>>>> aren't also formed by pattern inference.
>>>>>
>>>>
>>>> If you really believe this kind of bleating and tweeting claim that
>>>> "This is insane," then start a camp around what you do believe.  IF you get
>>>> ANY supporters of that competing camp to RQT
>>>> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>,
>>>> I will think you are more than a bleating and tweeting quality-blind
>>>> idiot who doesn't have enough faith in your own thinking to see if anyone
>>>> besides you would agree.  Otherwise, what do you expect me to believe?
>>>>
>>>>
>>>>> A leading theory of vision, opponent-process theory, involves exactly
>>>>> that.  There is legitimate proof that our perception of color is not a
>>>>> result of individual particular signals, but the differences and relations
>>>>> between multiple signals.  I don't see how this is any different besides
>>>>> the fact that one set of these signal relations comes from the retina and
>>>>> one set comes from text.
>>>>>
>>>>
>>>> You can't see how this theory, like all the peer-reviewed papers on
>>>> color perception, is quality blind?  How do you answer the questions in the
>>>> "are you color quality blind
>>>> <https://canonizer.com/topic/592-Are-You-Qualia-Blind/1-Agreement>?"
>>>> Socratic survey?
>>>>
>>>> I think, for what it is, this opponent process theory of color
>>>> perception is a good theory that explains a lot.  But this is 100% about
>>>> what Chalmers would refer to as the EASY problem.  It does absolutely
>>>> NOTHING to address the so-called "hard problem" of consciousness.  And it
>>>> does absolutely nothing to give us a hint of an idea that would help us
>>>> understand what color qualities are, not just what they seem to be.
>>>>
>>>> Brent
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> extropy-chat mailing list
>>>> extropy-chat at lists.extropy.org
>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>>
>>>