[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Jason Resch jasonresch at gmail.com
Fri Mar 24 18:39:01 UTC 2023


On Fri, Mar 24, 2023 at 1:21 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Fri, Mar 24, 2023 at 2:12 AM Stuart LaForge via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
>> But really the meanings of words are quite arbitrary and determined by
>> the people who use them. Thus the referential meanings of words evolve
>> and change over time, coming to refer to different things.
>
>
> I agree this is a reason for many human miscommunications, but the speaker
> understands his words to mean *something* and the hearer understands
> those words to mean *something*.
>
> As a computational linguist, Bender is on our side. She is obviously very
> excited about the progress these language models represent, but she is
> reminding us that the models do not actually understand words to mean
> anything whatsoever.
>
>

What's her evidence of that?

Jason

