[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Gordon Swobe gordon.swobe at gmail.com
Mon Mar 27 20:34:06 UTC 2023


On Mon, Mar 27, 2023 at 12:04 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>>> If that's true, how then does the LLM come to learn the spatial meaning of
>>> a word like 'down' when all the model encounters are "meaningless symbols"
>>> which are themselves only defined in terms of other "meaningless symbols"
>>> ad infinitum?
>>>
>>
>> It never learns those meanings, but because it understands the
>> grammatical (syntactic) relationships between the symbols,
>>
>
> But appropriately constructing a mathematical object suggests it has
> semantic meaning, does it not?
>

It certainly gives us that impression, but on careful analysis of what is
actually going on, we can see that it is the human operator who attributes
meaning to those symbols. GPT is merely very good at arranging them in
patterns that have meaning to *us*.
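
To make that distinction concrete, here is a toy sketch of my own (not
GPT's actual architecture, and the corpus is invented): a bigram model
that arranges symbols purely from co-occurrence counts. Whatever spatial
sense "down" seems to carry in its output is supplied by the human
reader, not by the program.

    from collections import defaultdict
    import random

    # Purely syntactic statistics: which symbol follows which.
    # The program has no access to what any word refers to.
    corpus = ("the ball rolled down the hill and "
              "the ball stopped at the bottom").split()

    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start="the", length=8):
        """Sample successors; any 'meaning' is attributed by the reader."""
        word, out = start, [start]
        for _ in range(length - 1):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    print(generate())  # e.g. "the ball rolled down the hill and the"

The output can look grammatical and even sensible, yet nothing in the
program ever connects "down" to direction or space.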


-gts