[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Will Steinberg steinberg.will at gmail.com
Fri Mar 24 02:36:46 UTC 2023


I don't have a lot of faith in a person who has a hypothesis and designs a
thought experiment that is essentially irrelevant to that hypothesis.  The
only connection is a tenuous metaphor, and the thought experiment fails
because the answer is obvious: as I and others have said earlier, the
octopus simply didn't have access to the information.  If the author wanted
to prove their actual hypothesis, they should have designed a thought
experiment that actually bears on it.  As it is, it looks like all they had
was a hunch and designed a bad thought experiment around it.  It's even
worse than the awful Chinese Room argument you spoke about ten years ago.

Like I mentioned, not having access to the actual referents doesn't even
mean a learning entity cannot know them.  You likely haven't experienced
MOST things you know.  You know them because of the experience of others,
just like the AI might.

I'm open to your argument in some ways, but you have done a poor job of
defending it.
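
To make "trained only on form" concrete (the phrase from the abstract you
quote below): all it means is that the model only ever sees statistics over
sequences of tokens.  Here is a toy sketch in Python, purely illustrative
and nothing like how GPT is actually built: a bigram model that learns
which word tends to follow which, with no access to any referents.

    from collections import defaultdict, Counter
    import random

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each word follows each other word (pure form:
    # co-occurrence statistics, nothing else).
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    def next_word(word):
        # Sample a continuation using nothing but the counts above.
        counts = bigrams[word]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # Generate a short continuation starting from "the": plausible-looking
    # form, zero grounding in cats, mats, or anything else.
    word = "the"
    output = [word]
    for _ in range(5):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

Whatever that little program "knows," it learned entirely from form.
Whether scaling that idea up ever gets you meaning is exactly the point in
dispute.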

On Thu, Mar 23, 2023, 9:45 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Thu, Mar 23, 2023 at 7:16 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> Gordon,
>> Basically what Bender is saying is "if the training of an NLM is limited
>> then the NLM would not know what certain words mean".
>>
>
> No, that is not what she is saying, though seeing as how people are
> misunderstanding her thought experiment, I must agree the experiment is not
> as clear as it could be. She is saying, or rather reminding us, that there
> is a clear distinction to be made between form and meaning and that these
> language models are trained only on form. Here is the abstract of her
> academic paper in which she and her colleague mention the thought
> experiment.
>
> --
> Abstract: The success of the large neural language models on many NLP
> tasks is exciting. However, we find that these successes sometimes lead to
> hype in which these models are being described as “understanding” language
> or capturing “meaning”. In this position paper, we argue that a system
> trained only on form has a priori no way to learn meaning. In keeping with
> the ACL 2020 theme of “Taking Stock of Where We’ve Been and Where We’re
> Going”, we argue that a clear understanding of the distinction between
> form and meaning will help guide the field towards better science around
> natural language understanding.
> --
> Bender is a computational linguist at the University of Washington. I
> think I read that she is actually the head of the department.
>
> the paper:
>
> https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

