[ExI] Emily M. Bender — Language Models and Linguistics (video interview)
Gordon Swobe
gordon.swobe at gmail.com
Mon Mar 27 04:22:23 UTC 2023
On Sun, Mar 26, 2023 at 10:01 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Do you agree that the LLM must know the spatial meanings of words like
> 'right', 'left', 'up', and 'down'? If not, how else could it create an
> accurate spatial map of the rooms in the house and their relative positions?
>
It knows how the words "left" and "right" relate to each other and to other
spatial symbols syntactically, but it has no access to the referents that
would give those symbols meaning. The fact that GPT can construct a coherent
essay from what are, to it, meaningless symbols is no less amazing to me than
the fact that it can build a mathematical model from them. It is all the same
amazing process: a giant web of inter-relationships among symbols that are
meaningless to it, but meaningful to us, because it was trained on the forms
and patterns of our language.
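A toy sketch of the point being made (this is not how GPT works internally,
just an illustration with made-up sentences): purely from co-occurrence
counts over text, with no access to any referent, "left" and "right" end up
with near-identical distributional vectors, because they appear in the same
syntactic contexts.

```python
# Illustrative only: distributional word vectors from raw co-occurrence
# counts in a tiny invented corpus. No grounding, only form.
from collections import Counter
from math import sqrt

corpus = [
    "the door is on the left side of the hall",
    "the window is on the right side of the hall",
    "turn left at the end of the hall",
    "turn right at the end of the hall",
    "the cat sat on the mat",
]

def vector(word, window=2):
    """Count words appearing within `window` positions of `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo = max(0, i - window)
                hi = min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

left, right, cat = vector("left"), vector("right"), vector("cat")
print(cosine(left, right))  # "left" and "right" share identical contexts here
print(cosine(left, cat))    # "cat" occurs in different contexts, lower score
```

In this toy corpus the model "knows" that "left" and "right" are
interchangeable symbols of the same kind, without ever knowing what either
word points to.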
-gts