[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Sat Mar 25 01:31:06 UTC 2023


On Thu, Mar 23, 2023 at 4:25 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Bender's point is not that ChatGPT is incapable of generating sensible
> sentences about sticks and bears. It is that these LLMs don't know the
> meanings of any words whatsoever.
>

### What do you mean by "they don't know"? Do you mean the LLM does not
have a representation of the general properties of bears, sticks, and other
physical objects? That it does not have a representation of the network of
possible interactions between them (what the psychological literature calls
"folk physics")? That it does not have a network of connections between
abstract representations of physical properties and invariant descriptions
of classes of objects (i.e., that it cannot recognize images of objects,
classify them, and link them to their various properties)?

GPT-4 clearly has the above mental affordances. In other words, it knows the
meaning of "bear", including a bear's possible effects on human survival
and the extent of its vulnerability to sticks. It can look at a picture of a
bear, extrapolate what is going to happen during a bear attack, formulate a
plan of defense using available resources, and output a coherent narrative.
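
For anyone who wants to probe this themselves, here is a minimal sketch of
the kind of query I have in mind, assuming access to a vision-capable
GPT-4-class model through the OpenAI Python SDK (v1+). The model name and
image URL below are placeholders for illustration, not claims about any
particular deployment.

    # Sketch: show a multimodal model a bear photo and ask for a defense plan.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder for any vision-capable GPT-4-class model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("What animal is in this photo? If it charged me and "
                          "all I had was a stick, what should I do?")},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/bear.jpg"}},  # placeholder image
            ],
        }],
    )

    print(response.choices[0].message.content)

The reply you get back is exactly the kind of coherent, situation-appropriate
narrative I am describing above.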

In what way is this process not understanding what a bear is?

Or maybe I should ask, what does it mean for a human to *understand*, in
mechanistic, psychophysiological terms?

Do tell us how you understand the word "understand", using the language of
basic neuroscience, so we can understand what we are discussing here.

Rafal