[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Jason Resch jasonresch at gmail.com
Mon Mar 27 21:00:02 UTC 2023


On Mon, Mar 27, 2023, 4:34 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
>
> On Mon, Mar 27, 2023 at 12:04 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>>> If that's true, how then does the LLM come to learn the spatial meaning of
>>>> a word like 'down' when all the model encounters are "meaningless symbols"
>>>> which are themselves only defined in terms of other "meaningless symbols"
>>>> ad infinitum?
>>>>
>>>
>>> It never learns those meanings, but because it understands the
>>> grammatical (syntactic) relationships between the symbols,
>>>
>>
>> But appropriately constructing a mathematical object suggests it has
>> semantic meaning, does it not?
>>
>
> It certainly gives us that impression, but on careful analysis of what is
> actually going on, we can see that it is the human operator who attributes
> meaning to those symbols. GPT is merely very good at arranging them in
> patterns that have meaning to *us*.
>


I think that's why this particular example is so important for escaping that
trap: mathematical structures are objective. Which vertices are connected by
which edges isn't something that can be faked or misinterpreted; it simply is.
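
To make that concrete, here is a minimal sketch in Python (the graph, the
vertex labels, and the helper function are hypothetical, chosen purely for
illustration): whether a claimed edge list describes the same graph as a
reference one is a mechanical check, and it comes out the same for every
observer.

# Minimal sketch (hypothetical graph and labels): graph structure is
# objective, so comparing two descriptions of a graph is a mechanical
# check, not a matter of interpretation.

def normalize(edges):
    """Represent an undirected graph as a set of unordered vertex pairs."""
    return {frozenset(e) for e in edges}

# Reference structure: a 4-cycle on vertices A, B, C, D.
reference = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]

# A claimed description of the same graph, with the edges listed in a
# different order and each edge's endpoints swapped.
claimed = [("B", "A"), ("A", "D"), ("D", "C"), ("C", "B")]

# The comparison succeeds or fails on its own; no observer has to
# "attribute" meaning for the answer to come out.
print(normalize(claimed) == normalize(reference))  # prints: True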

Jason

