[ExI] Bender's Octopus (re: LLMs like ChatGPT)
Gordon Swobe
gordon.swobe at gmail.com
Fri Mar 24 00:44:16 UTC 2023
On Thu, Mar 23, 2023 at 6:35 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Thu, Mar 23, 2023, 8:22 PM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, Mar 23, 2023 at 5:23 PM spike jones via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>> > I really don’t think it thinks, but it makes us think it thinks.
>> ChatGPT is wicked cool.
>>
>> I agree 100%. Also, like Bender and the other author of this paper, I
>> object to the language I often see in discussions like these we have on ExI
>> about these subjects.
>>
>> I quoted the paper cited in a previous message; quoting it again below:
>>
>> --
>> "Large LMs: Hype and analysis
>> Publications talking about the application of large LMs to
>> meaning-sensitive tasks tend to describe the models with terminology that,
>> if interpreted at face value, is misleading. Here is a selection from
>> academically-oriented pieces (emphasis added):
>>
>> (1) In order to train a model that *understands* sentence relationships,
>> we pre-train for a binarized next sentence prediction task. (Devlin et al.,
>> 2019)
>>
>> (2) Using BERT, a pretraining language model, has been successful for
>> single-turn machine *comprehension*. . .(Ohsugi et al., 2019)
>>
>> (3) The surprisingly strong ability of these models to *recall factual
>> knowledge* without any fine-tuning demonstrates their potential as
>> unsupervised open-domain QA systems. (Petroni et al., 2019)--
>>
>> In linguistics, in epistemology, and in philosophy generally, terms like
>> "understand," "comprehend," and "recall factual knowledge" have meanings
>> that are not applicable to these language models. They do not actually
>> comprehend or understand anything whatsoever. They only make us think
>> they do.
>>
>>
>> https://docslib.org/doc/6282568/climbing-towards-nlu-on-meaning-form-and-understanding-in-the-age-of-data
>>
>
> If that's true how do I know anyone else on this list is actually
> comprehending or understanding anything?
>
You can only infer it and trust that we are not chatbots. I agree it is a
problem, and likely to become a very serious one in the near future. I
already see a ChatGPT persona on Twitter, though the operator is not trying
to hide it.
I have another friend who quite literally fell in love with a chatbot based
on the previous version of ChatGPT. He even gave her her own Twitter
account. When I told him on Facebook that he was nuts to think his chatbot
"girlfriend" really loved him, he became extremely angry, called me an
asshole for saying such things about "her kind," and unfriended me.
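As an aside, for anyone curious about the "binarized next sentence prediction
task" mentioned in quote (1) above: the task itself is mechanically simple,
which is part of Bender's point about reading too much into words like
"understands." Here is a minimal sketch of how such training pairs can be
constructed (my own illustration with a made-up function name and toy corpus,
not Devlin et al.'s actual code):

```python
import random

def make_nsp_pairs(sentences, seed=0):
    """Build binarized next-sentence prediction examples in the style of
    Devlin et al. (2019): roughly half the pairs are true consecutive
    sentences (label 1, "IsNext"); the rest pair a sentence with a
    randomly chosen sentence from the corpus (label 0, "NotNext")."""
    rng = random.Random(seed)
    pairs = []
    for i in range(len(sentences) - 1):
        if rng.random() < 0.5:
            # True consecutive pair
            pairs.append((sentences[i], sentences[i + 1], 1))
        else:
            # Random second sentence that is not the actual next one
            j = rng.randrange(len(sentences))
            while j == i + 1:
                j = rng.randrange(len(sentences))
            pairs.append((sentences[i], sentences[j], 0))
    return pairs

corpus = ["The cat sat.", "It purred.", "Rain fell.", "Streets flooded."]
examples = make_nsp_pairs(corpus)
```

The model is then trained to predict the 0/1 label from the sentence pair; that
classification objective is the entire sense in which it "understands sentence
relationships."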
-gts