[ExI] Symbol Grounding

Darin Sunley dsunley at gmail.com
Fri Apr 28 18:36:27 UTC 2023


The thing that gets missed so often here is that GPT-4 isn't just mapping
arbitrary token strings to arbitrary token strings. It's mapping human-generated
language strings to human-generated language strings.

Those human-generated language strings (a) are a tiny, infinitesimal
subset of the space of all token strings, and (b) have a /lot/ of internal
structure, simply by virtue of being human-generated language. These
strings were generated by systems, which is to say people, whose symbols
/do/ ground in a larger consistent system: the universe.

Earlier philosophers who despaired of the symbol grounding problem being
solvable were not, I think, imagining the /vast/ amount of written text and
images of the real world that large language models have access to.

It turns out that a Boltzmann brain floating in interstellar space, if it
reads enough about Earth, can have a very deep understanding of how Earth
works without having directly experienced it. The fact that the training
data grounds in reality causes the learned model to inherit that grounding,
in a way that a similarly complex model trained on arbitrary token strings
would not.
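
To make (a) and (b) concrete, here's a toy back-of-the-envelope sketch in
Python (mine, entirely made up for illustration, and nothing to do with
GPT-4's actual architecture): even a crude character-level bigram model fit
on a couple of sentences of English is far less "surprised" by a new English
sentence than by a random string over the same alphabet. That surprise gap is
the internal structure I'm talking about.

    # Toy sketch only: a character-level bigram model with add-one smoothing,
    # trained on a tiny made-up corpus. Not GPT-4; just an illustration that
    # human-generated text occupies a small, highly structured region of
    # string space that even a crude statistical model can pick up on.
    import math
    import random
    from collections import Counter

    corpus = (
        "the cat sat on the mat and the dog slept by the door "
        "people write sentences that follow shared patterns of grammar and habit"
    )

    bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent character pairs
    unigrams = Counter(corpus)                   # counts of single characters
    alphabet = sorted(unigrams)                  # every character the model knows about

    def avg_surprise(text):
        """Average bits of surprise per character under the smoothed bigram model."""
        total = 0.0
        for a, b in zip(text, text[1:]):
            p = (bigrams[(a, b)] + 1) / (unigrams[a] + len(alphabet))
            total += -math.log2(p)
        return total / max(len(text) - 1, 1)

    english = "the dog sat by the door"
    gibberish = "".join(random.choice(alphabet) for _ in range(len(english)))

    print("english sentence:", round(avg_surprise(english), 2), "bits/char")
    print("random string:   ", round(avg_surprise(gibberish), 2), "bits/char")
    # The English sentence should come out with noticeably fewer bits per
    # character -- it sits inside the structured subset the model was fit on.

Scale that intuition up to billions of parameters and trillions of tokens of
text whose structure ultimately traces back to the world, and the grounding
comes along for the ride.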

On Fri, Apr 28, 2023, 12:03 PM Darin Sunley <dsunley at gmail.com> wrote:

> GPT-4 is architecturally incapable of introspection. It has a lot of
> knowledge on a wide variety of subjects, but the only thing it knows about
> itself is what has been written in its training material. It cannot
> perceive its own decision-making process directly, even to the extremely
> limited extent humans can. I am therefore not terribly interested in what
> it has to say about its own cognition and consciousness. (Beyond, of
> course, being endlessly fascinated, entertained, and mildly future-shocked
> that it /can/ execute discourse on the nature of consciousness at all. As
> Scott Aaronson said, how is it people can stop being fascinated and amazed
> at LLMs long enough to get angry with each other over them?) Its
> statements about itself are fascinating, but not compelling.
>
> Moreover, I have no idea what you even mean by "understanding meaning." If
> being able to give precise and correct definitions of every word it uses,
> using those words correctly in the context of every other word it knows,
> coining new words, providing operational definitions of those new words,
> and again using them flawlessly in arbitrary contexts isn't
> "understanding", then not only do I not know what "understanding" is, but
> I'm not certain I care. The thing demonstrates every characteristic of
> reading comprehension we assess school students for.
>
> One might just as well say an electronic circuit doesn't /really/ do
> arithmetic. That's as may be, but then I'm pretty sure I don't actually
> care about "real arithmetic." As with arithmetic, a flawless imitation of
> understanding /is/ understanding. Why should I care about a distinction
> that appears to have no difference?
>
> On Fri, Apr 28, 2023 at 11:35 AM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> On Wed, Apr 26, 2023 at 1:10 PM Ben Zaiboc via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On 26/04/2023 18:32, extropy-chat-request at lists.extropy.org wrote:
>>>
>>> On Wed, Apr 26, 2023 at 10:58 AM Ben Zaiboc via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> I wrote to you that in my opinion you were conflating linguistics and
>>>> neuroscience.
>>>>
>>>> Actually, you went further than that, arguing that linguistics is not
>>>> even the correct discipline.  But you were supposedly refuting my recent
>>>> argument which is entirely about what linguistics — the science of language
>>>> — can inform us about language models.
>>>>
>>>> -gts
>>>>
>>>>
>>>>
>>>> Yes, prior to my question, which has a point. But you are still dodging
>>>> it.
>>>>
>>>
>>> I simply have no interest in it.
>>>
>>>
>>> OK, then. That clears that up. You have no interest in even listening to
>>> someone else's argument, much less engaging with it. I get it.
>>>
>>
>> I explained that while your theory of spike trains in the brain and so on
>> is interesting, it tells me nothing about how a digital computer with no
>> brain, no nervous system, and no sense organs or sensory apparatus
>> whatsoever can understand the meanings of words merely from analyzing how
>> they appear in relation to one another statistically in the corpus.
>>
>> The reality as I see it, and *as GPT-4 itself explains it*, is that it
>> does not truly understand the meanings of words. We all find that amazing
>> and difficult to believe, as the words appear meaningful to us and are
>> sometimes even profoundly meaningful, but we as the end users of this
>> technology are the ones finding and assigning the meanings to the words.
>> GPT-4 is merely generating symbols that it has a high degree of
>> confidence will have meaning to us.
>>
>> -gts
>>
>