[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Gordon Swobe gordon.swobe at gmail.com
Fri Mar 24 21:19:41 UTC 2023


On Fri, Mar 24, 2023 at 12:41 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>> As a computational linguist, Bender is on our side. She is obviously very
>> excited about the progress these language models represent, but is
>> reminding us that the models do not actually understand words to mean
>> anything whatsoever.
>>
>>
>
> What's her evidence of that?
>

After all this discussion over many days, it surprises me that you would
ask that question. Perhaps you are writing for Stuart’s sake, as I was
responding to him.

Words have referents, and it is those referents that give them their
meanings. Referents exist outside
of language. When you show me an apple in your hand and say “This is an
apple,” it is the apple in your hand that gives your utterance “apple”
meaning. That apple is not itself a word. It exists outside of language.

These LLMs do no more than analyze the statistical patterns of the forms
of words in written language. They have no access to the referents and
therefore cannot know the meanings. You disagree with me on this point,
arguing that by some magic they can grasp meanings that exist outside of
language while having no access to them. To me (and to Bender and her
colleague Koller), that defies logic and reason.
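To make the point concrete, here is a toy sketch of my own (a simple
bigram model in Python, far cruder than the transformer models behind
ChatGPT, but the principle is the same): the only training signal is
which word forms follow which other word forms. Nowhere in the corpus or
in the learned statistics is there an apple, only the string "apple".

    # Toy illustration: a "language model" trained on word forms alone.
    # (A deliberately crude bigram sketch, not the architecture real
    # LLMs use; the point is that the training data is nothing but
    # co-occurrences of symbols, with no referents anywhere.)
    from collections import defaultdict, Counter
    import random

    corpus = "this is an apple . an apple is a fruit . this is a pear .".split()

    # Count which word follows which -- pure form, no access to referents.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        """Sample the next word from the corpus statistics alone."""
        counts = follows[prev]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate text: plausible sequences of forms, with no apple in sight.
    word = "this"
    out = [word]
    for _ in range(8):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))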

-gts

