[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Jason Resch jasonresch at gmail.com
Mon Mar 27 03:57:32 UTC 2023


On Sun, Mar 26, 2023, 11:50 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
>
> On Sun, Mar 26, 2023 at 9:29 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I do not understand why you interpret it as so amazing that words alone
>>> were sufficient to construct a mathematical model and graph of a house.
>>> That demonstrates that GPT-4 is intelligent, but the question, as I thought
>>> we understood in our last exchange, is whether it had a conscious
>>> understanding of the words it used to construct the model, where
>>> understanding entails holding the word meanings consciously in mind.
>>>
>>
>> No this isn't my point. Ignore the issue of consciousness here.
>>
>> My point is that this shows the LLM has overcome the symbol grounding
>> problem. It has somehow learned how to correctly interpret the meanings of
>> the words.
>>
>
> I don't see how creating a mathematical model from words proves anything
> of the sort. As one of my other detractors (Giovanni, I think) correctly
> pointed out, mathematics is another form of language. How does the fact
> that GPT "knows" that "one plus one equals two" can also be expressed
> numerically as "1+1=2" tell us anything about the grounding problem,
> which entails having access to referents outside of language?
>

Do you agree that the LLM must know the spatial meanings of words like
'right', 'left', 'up', and 'down'? If not, how else could it create an
accurate spatial map of the rooms in the house and their relative positions?
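
To make the question concrete, here is a minimal Python sketch of what
"building a spatial map from directional words" amounts to. The room names
and relations are hypothetical illustrations, not a claim about GPT-4's
actual internal representation; the point is only that words like 'left'
and 'right' must be bound to consistent spatial offsets for the resulting
map to come out correct.

# Toy sketch (hypothetical): convert directional statements into
# relative 2D coordinates, the kind of map the house test produces.
descriptions = [
    ("kitchen", "right of", "hallway"),
    ("bedroom", "above", "kitchen"),
    ("bathroom", "left of", "hallway"),
]

# The spatial meaning of each directional phrase as an (x, y) offset.
OFFSETS = {
    "right of": (1, 0),
    "left of": (-1, 0),
    "above": (0, 1),
    "below": (0, -1),
}

positions = {"hallway": (0, 0)}  # anchor the map at the hallway

# Place each room relative to a room already on the map.
for room, relation, anchor in descriptions:
    ax, ay = positions[anchor]
    dx, dy = OFFSETS[relation]
    positions[room] = (ax + dx, ay + dy)

for room, (x, y) in sorted(positions.items()):
    print(f"{room}: ({x}, {y})")

Unless the model has grounded those direction words in something like
these offsets, the layout it produces would come out scrambled.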

Jason