[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Gordon Swobe gordon.swobe at gmail.com
Fri Mar 24 00:20:10 UTC 2023


On Thu, Mar 23, 2023 at 5:23 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I really don’t think it thinks, but it makes us think it thinks.  ChatGPT
> is wicked cool.

I agree 100%. Like Bender and her co-author, I also object to the language I
often see in our discussions of these subjects here on ExI.

Quoting the paper cited in a previous message and linked below:

--
"Large LMs: Hype and analysis
Publications talking about the application of large LMs to
meaning-sensitive tasks tend to describe the models with terminology that,
if interpreted at face value, is misleading. Here is a selection from
academically-oriented pieces (emphasis added):

(1) In order to train a model that *understands* sentence relationships, we
pre-train for a binarized next sentence prediction task. (Devlin et al.,
2019)

(2) Using BERT, a pretraining language model, has been successful for
single-turn machine *comprehension*. . .(Ohsugi et al., 2019)

(3) The surprisingly strong ability of these models to *recall factual
knowledge* without any fine-tuning demonstrates their potential as
unsupervised open-domain QA systems. (Petroni et al., 2019)"
--

In linguistics, in epistemology, and in philosophy generally, terms like
"understand," "comprehend," and "recall factual knowledge" have meanings
that do not apply to these language models. The models do not actually
comprehend or understand anything whatsoever. They only make us think they
do.


https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data

-gts
