[ExI] Symbol Grounding

Gordon Swobe gordon.swobe at gmail.com
Fri Apr 28 18:43:56 UTC 2023


> GPT-4 is architecturally incapable of introspection

Early in my investigation/interrogation of GPT-4, I realized that I should
not ask it to introspect and started asking it about language models in
general, a subject about which it clearly has a great deal of knowledge, no
different from any other subject on which it has been trained. I write to
it as if it were a professor of language models.

Consider a simple example, the sentence "The apple fell from the ____."

Language models know with a high degree of confidence that the next symbols
in the sentence are "t","r","e","e" and will, like any autocomplete
function, complete the sentence as such, with no knowledge of what a tree
is, i.e., with no knowledge of the meaning of "tree."
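
To make the mechanism concrete, here is a minimal sketch in Python, using
the small, publicly available GPT-2 model from Hugging Face's transformers
library as a stand-in (GPT-4's weights are not public, so the choice of
model and the exact probabilities here are only illustrative):

# A minimal sketch of next-token prediction, using GPT-2 as a stand-in
# for GPT-4 (whose weights are not public).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The apple fell from the"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the token that would follow the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
# " tree" should rank at or near the top: the model assigns it high
# probability from statistics over symbols alone, with no concept of a tree.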

While similar to auto-complete, language models are obviously more
powerful than simple auto-complete functions. The apple might instead have
fallen from someone's hand, but GPT also looks at the context of the
sentence to decide whether it was a hand or a tree. If the previous sentence
was about someone holding an apple in his hand, it is smart enough to know
that the apple fell from the hand. This ability to "understand" words in
context is quite amazing, but it is still only looking at symbols.
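
Continuing the same sketch, prepending a sentence of context should shift
the prediction toward "hand" (again, the stand-in model and the exact
numbers are only illustrative):

# Continuing the sketch above: the same model, with and without prior context.
for prompt in [
    "The apple fell from the",
    "He was holding an apple in his hand. The apple fell from the",
]:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits[0, -1], dim=-1)
    p_tree = probs[tokenizer.encode(" tree")[0]].item()
    p_hand = probs[tokenizer.encode(" hand")[0]].item()
    print(f"{prompt!r}\n  P(' tree') = {p_tree:.3f}   P(' hand') = {p_hand:.3f}")
# With the added context, probability should shift toward " hand", yet the
# model is still only weighing symbols against symbols.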

But back to your main point, I do not believe that we should trust GPT-4's
understanding of large language models any less than we should trust its
understanding of any other subject on which it is trained. This training on
the literature about AI and LLMs is probably one of the distinguishing features
of state-of-the-art GPT-4 in comparison to less evolved LLMs. I expect that
over the next few months, all major language models will give similar
answers to these questions about the inner workings of language models,
consistent with the literature on language models.

-gts







On Fri, Apr 28, 2023 at 12:06 PM Darin Sunley via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> GPT-4 is architecturally incapable of introspection. It has a lot of
> knowledge on a wide variety of subjects, but the only things it knows about
> itself is what has been written in its training material. It cannot
> perceive its own decision making process directly, even to the extremely
> limited extent humans can. I am therefore not terribly interested in what
> it has to say about its own cognition and consciousness. (Beyond, of
> course, being endlessly fascinated, entertained, and mildly future shocked
> that it /can/ execute discourse on the nature of consciousness at all. Like
> Scott Aaronson said, how is it people can stop being fascinated and amazed
> at LLMs long enough to get angry with each other over them?) Its
> statements about itself are fascinating, but not compelling.
>
> Moreover, I have no idea what your "understanding meaning" even is. If
> being able to give precise and correct definitions of every word it uses,
> and using those words precisely correctly in the context of every other
> word it knows, coining new words, providing operational definitions of
> those new words, and again using them flawlessly in arbitrary contexts
> isn't "understanding", than not only do I not know what "understanding" is,
> but I'm not certain I care. The thing demonstrates every characteristic of
> reading comprehension we assess school students for.
>
> One might just as well say an electronic circuit doesn't /really/ do
> arithmetic. That's as may be, but then I'm pretty sure I don't
> actually care about "real arithmetic." As with arithmetic, a flawless
> imitation of understanding /is/ understanding. Why should I care about a
> distinction that appears to have no difference?
>
> On Fri, Apr 28, 2023 at 11:35 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> On Wed, Apr 26, 2023 at 1:10 PM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On 26/04/2023 18:32, extropy-chat-request at lists.extropy.org wrote:
>>>
>>> On Wed, Apr 26, 2023 at 10:58 AM Ben Zaiboc via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> I wrote to you that in my opinion you were conflating linguistics and
>>>> neuroscience.
>>>>
>>>> Actually, you went further than that, arguing that linguistics is not
>>>> even the correct discipline.  But you were supposedly refuting my recent
>>>> argument which is entirely about what linguistics — the science of language
>>>> — can tell us about language models.
>>>>
>>>> -gts
>>>>
>>>>
>>>>
>>>> Yes, prior to my question. Which has a point. But you are still dodging
>>>> it.
>>>>
>>>
>>> I simply have no interest in it.
>>>
>>>
>>> OK, then. That clears that up. You have no interest in even listening to
>>> someone else's argument, much less engaging with it. I get it.
>>>
>>
>> I explained that while your theory of spike trains in the brain and so on
>> is interesting, it tells me nothing about how a digital computer with no
>> brain and no nervous system and no sense organs or sensory apparatus
>> whatsoever can understand the meanings of words merely from analyzing how
>> they appear in relation to one another statistically in the corpus.
>>
>> The reality as I see it and *as GPT-4 itself explains it* is that it
>> does not truly understand the meanings of words. We all find that amazing
>> and difficult to believe as the words appear meaningful to us and sometimes
>> even profoundly meaningful, but we as the end-users of this technology are
>> the ones finding/assigning the meanings to the words. GPT-4 is merely
>> generating symbols that it has a high degree of confidence will have
>> meaning to us.
>>
>> -gts
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

