[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Stuart LaForge avant at sollegro.com
Fri Mar 24 08:11:04 UTC 2023


Quoting Gordon Swobe <gordon.swobe at gmail.com>:

> Bender's point is not that ChatGPT is incapable of generating sensible
> sentences about sticks and bears. It is that these LLMs don't know the
> meanings of any words whatsoever. Confronted with a word it has never seen,
> it must do a statistical analysis to try to find probable next words, never
> knowing what any of them mean.
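
[Purely as an illustration of the "find probable next words" mechanism Gordon describes: a minimal sketch of next-token prediction, assuming the Hugging Face transformers library and a small GPT-2 checkpoint as a stand-in. This is not a claim about ChatGPT's actual internals.]

    # Minimal sketch: ask a small causal language model for its most
    # probable next tokens.  Model choice ("gpt2") is an assumption.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The bear chased me, so I picked up a"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits   # shape: (1, seq_len, vocab_size)

    # Probability distribution over the vocabulary for the next token.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, tok_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(tok_id):>12}  p={p.item():.3f}")

The model ranks every word in its vocabulary by probability; whether that ranking amounts to "knowing what any of them mean" is exactly the point in dispute.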


You and Bender seem to be making a big deal about the referential meaning
of words, as if they were some sacred goal-keepers of consciousness.
But really the meanings of words are quite arbitrary and determined by
the people who use them. Thus the referential meanings of words evolve
and change over time and come to refer to different things. Take the
word "terrific" for example. At one time its referent was something
terrible, cognate with "horrific," which refers to something horrible.
These days, however, its referent is something great or wonderful.

Or consider how the meaning of "liberal" evolved in the U.S. from someone
who acts like a free person in a free market to someone who wants big
government, high taxes, and welfare programs. Or take, for example, when
a kid on the street tells you, "Your drip is fresh." The meanings of
words shift, change, and evolve over time, and sometimes even define
social groups. Consciousness is the ability to communicate despite the
shifting, arbitrary meanings of words, not because words have some true,
absolute referential meaning.

Stuart LaForge