[ExI] Bender's Octopus (re: LLMs like ChatGPT)
Gordon Swobe
gordon.swobe at gmail.com
Fri Mar 24 18:19:32 UTC 2023
On Fri, Mar 24, 2023 at 2:12 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> But really the meanings of words are quite arbitrary and determined by
> the people who use them. Thus the referential meanings of words evolve
> and change over time and come to refer to different things.
I agree this is a reason for many human miscommunications, but the speaker
understands his words to mean *something* and the hearer understands
those words to mean *something*.
As a computational linguist, Bender is on our side. She is obviously very
excited about the progress these language models represent, but she is
reminding us that the models do not actually understand words to mean
anything whatsoever.
-gts