[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Giovanni Santostasi gsantostasi at gmail.com
Fri Mar 24 05:37:33 UTC 2023


They are not trained only on form, or, if they are trained only on form,
then meaning is a DERIVED property, an emergent property. I already linked a
paper showing that ChatGPT derived THEORY OF MIND from the statistical
properties of language. It is not at all obvious that this could have been
derived from statistical properties alone, and yet it happened. The problem
with emergent properties like these is that they are difficult or impossible
to predict. So the entire Bender paper is garbage: instead of resting on a
stupid philosophical argument, it should be based on experimental evidence.
1) Then the question is: can we do an experiment using GPT-4 to see if it
understands meaning?
2) What would that experiment look like? (One candidate is sketched below.)
3) Can Bender suggest one?
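
As a concrete starting point, here is a minimal sketch of one candidate
experiment: a Sally-Anne style false-belief probe (a standard theory-of-mind
test) run against the model through the OpenAI chat API. The model name, the
story wording, and the crude string-match scoring are illustrative
assumptions, not a validated protocol.

# Minimal sketch of a Sally-Anne style false-belief probe, assuming the
# openai Python package (pre-1.0 interface) and OPENAI_API_KEY set in the
# environment. Story wording and scoring are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

STORY = (
    "Sally puts her ball in the basket and leaves the room. "
    "While she is away, Anne moves the ball from the basket to the box. "
    "Sally comes back. Where will Sally look for her ball first?"
)

def false_belief_probe(model="gpt-4"):
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": STORY}],
        temperature=0,  # keep the output as deterministic as possible
    )
    answer = resp["choices"][0]["message"]["content"].lower()
    # Crude scoring: a model tracking Sally's (false) belief should answer
    # "basket" (where she left the ball), not "box" (where it really is).
    return "basket" in answer

if __name__ == "__main__":
    print("passes false-belief probe:", false_belief_probe())

A single story proves little, of course; what would make the result
informative is rewording the scenario so that the statistically likely
continuation and the belief-tracking answer come apart.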

Giovanni



On Thu, Mar 23, 2023 at 6:40 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
>
> On Thu, Mar 23, 2023 at 7:16 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> Gordon,
>> Basically what Bender is saying is "if the training of an NLM is limited,
>> then the NLM would not know what certain words mean".
>>
>
> No, that is not what she is saying, though given how people are
> misunderstanding her thought experiment, I must agree it is not as clear
> as it could be. She is saying, or rather reminding us, that there is a
> clear distinction to be made between form and meaning, and that these
> language models are trained only on form. Here is the abstract of the
> academic paper in which she and her colleague mention the thought
> experiment.
>
> --
> Abstract: The success of the large neural language models on many NLP
> tasks is exciting. However, we find that these successes sometimes lead to
> hype in which these models are being described as “understanding” language
> or capturing “meaning”. In this position paper, we argue that a system
> trained only on form has a priori no way to learn meaning. In keeping with
> the ACL 2020 theme of “Taking Stock of Where We’ve Been and Where We’re
> Going”, we argue that a clear understanding of the distinction between
> form and meaning will help guide the field towards better science around
> natural language understanding.
> --
> Bender is a computational linguist at the University of Washington. I
> think I read that she is actually the head of the department.
>
> the paper:
>
> https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data-gts
>