[ExI] Emily M. Bender — Language Models and Linguistics (video interview)
Jason Resch
jasonresch at gmail.com
Mon Mar 27 04:37:29 UTC 2023
On Mon, Mar 27, 2023, 12:23 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:
>
>
> On Sun, Mar 26, 2023 at 10:01 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Do you agree that the LLM must know the spatial meanings of words like
>> 'right', 'left', 'up', and 'down'? If not, how else could it create an
>> accurate spatial map of the rooms in the house and their relative positions?
>>
>
>
> It knows how the words "left" and "right" relate to each other and to
> other symbols related to spatial dimensions, but only syntactically, with
> no access to the referents that give them meaning.
>
> The fact that GPT can construct a coherent essay from what are, to it,
> meaningless symbols is to me no less amazing than the fact that it can
> create a mathematical model from meaningless symbols. It's all basically
> the same amazing process: a giant web of inter-relationships between
> symbols that have no meaning to it, but which do have meaning to us,
> because it was trained on the forms and patterns of our language.
>
But your (and, I think, Bender's) assertion was that there is no possible
way to learn any meaning whatsoever without a Rosetta stone, or without
programming in some model of reality from the start. If that's true, how,
then, does the LLM come to learn the spatial meaning of a word like 'down'
when all the model encounters are "meaningless symbols," themselves
defined only in terms of other "meaningless symbols," ad infinitum?
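
To make that concrete, here is a toy sketch in Python. It is entirely my
own illustration, nothing like GPT's actual training: the corpus, the
window size, and the vector dimensionality are all invented for the
example. It builds a word co-occurrence matrix from a few sentences,
factors it with SVD, and shows that the vectors for 'left' and 'right'
land nearly on top of each other, because the two words occur in
interchangeable contexts. Relational structure emerges from the
statistics of the symbols alone:

# Toy distributional-semantics sketch (illustrative only; parameters
# and corpus are made up, and this is not how GPT is trained).
import numpy as np

corpus = [
    "the ball rolled down the hill",
    "the ball rolled up the hill",
    "she looked up at the sky",
    "he looked down at the floor",
    "turn left at the corner",
    "turn right at the corner",
    "the door on the left was open",
    "the door on the right was open",
]

tokens = sorted({w for s in corpus for w in s.split()})
index = {w: i for i, w in enumerate(tokens)}

# Count co-occurrences within a +/-2 word window.
counts = np.zeros((len(tokens), len(tokens)))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                counts[index[w], index[words[j]]] += 1

# Low-rank factorization: each word becomes a dense vector whose
# geometry reflects only the distributional statistics of the corpus.
u, s, _ = np.linalg.svd(counts, full_matrices=False)
vectors = u[:, :4] * s[:4]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# 'left' and 'right' appear in interchangeable contexts, so their
# vectors end up nearly identical; 'left' and 'ball' do not.
print(cosine(vectors[index["left"]], vectors[index["right"]]))
print(cosine(vectors[index["left"]], vectors[index["ball"]]))

Whether that geometric structure counts as "meaning" is exactly the
question at issue, but it shows the structure itself is learnable with no
access to any referent.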
Jason