[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Jason Resch jasonresch at gmail.com
Fri Mar 24 22:13:12 UTC 2023


On Fri, Mar 24, 2023, 5:19 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
>
> On Fri, Mar 24, 2023 at 12:41 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>> As a computational linguist, Bender is on our side. She is obviously very
>>> excited about the progress these language models represent, but is
>>> reminding us that the models do not actually understand words to mean
>>> anything whatsoever.
>>>
>>>
>>
>> What's her evidence of that?
>>
>
> After all this discussion over many days, it surprises me that you would
> ask that question. Perhaps you are writing for Stuart’s sake as I was
> responding to him.
>

I ask because I haven't yet seen any evidence supporting this claim.


> Words have meanings, also called referents.
>

Words have meanings.
Words may refer to other things.

But I think it's an error to equate "meaning" with "referent." Meaning is
subjective and exists in the mind of the interpreter, while referents are
(usually) objective.

> These referents exist outside of language. When you show me an apple in
> your hand and say “This is an apple,” it is the apple in your hand that
> gives your utterance “apple” meaning. That apple is not itself a word. It
> exists outside of language.
>

Agreed.


> These LLMs do no more than analyze the statistical patterns of the forms
> of words in written language.
>

I disagree. I think they also build models of reality, and of the things in
that reality that the words they encounter describe. What proof do you have
that all they do is analyze statistical patterns, and that they do not build
models?


> They have no access to the referents.
>

Neither do we. We only have access to our perceptions, never the outside
world.

> and therefore cannot know the meanings.
>

I disagree. We never have direct access to referents, and this is obviously
the case for abstract things like the number 2, yet we can still understand
what the number 2 means.

> You disagree with me on that fact, arguing that by some magic, they can
> know the meanings of words outside of language while having no access to
> them.
>

I've explained it. It's not magic. I've shown you how meaning can be
extracted from any data set with patterns. You tend not to reply to those
emails, however.

To me (and to Bender and her colleague Koller), that defies logic and
> reason.
>

Our brains are a clear counterexample to their claims, and to yours. That you
persist in arguing for this idea, in the face of the existence of this
counterexample, is what defies logic and reason.

Jason
