[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Jason Resch jasonresch at gmail.com
Mon Mar 27 03:28:04 UTC 2023


On Sun, Mar 26, 2023, 11:19 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

>
> On Sun, Mar 26, 2023 at 8:42 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I was hoping most of all you would get a chance to see and respond to
>> this:
>>
>> Page 51 of this PDF: https://arxiv.org/pdf/2303.12712.pdf
>>
>> This might be the most important and convincing page in the document for
>> the purposes of our discussion. To me, it proves beyond doubt that GPT-4
>> has overcome the symbol grounding problem. That is to say, it has
>> convincingly bootstrapped the meanings of the words as they map to reality.
>>
>> This is because *words alone* were sufficient for GPT-4 to construct a
>> mathematical model (a graph with vertices and edges) consistent with the
>> layout of the rooms in the house, as they were described purely in words.
>> Is there any other way to interpret this?
>>
>
> I do not understand why you interpret it as so amazing that words alone
> were sufficient to construct a mathematical model and graph of a house.
> That demonstrates that GPT-4 is intelligent, but the question, as I thought
> we understood in our last exchange, is whether it had a conscious
> understanding of the words it used to construct the model, where
> understanding entails holding the word meanings consciously in mind.
>

No, this isn't my point. Ignore the issue of consciousness here.

My point is that this shows the LLM has overcome the symbol grounding
problem. It has somehow learned how to correctly interpret the meanings of
the words.
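
To make that concrete, here is a rough sketch (my own illustration, not
code from the paper) of the kind of structure GPT-4 produced: rooms as
vertices, doorways as edges, assembled entirely from a textual
description. The room names and layout below are made up.

# Hypothetical doorway list, as might be extracted from a house
# described in plain English (not the paper's actual example).
doorways = [
    ("kitchen", "hallway"),
    ("hallway", "living room"),
    ("hallway", "bathroom"),
    ("living room", "bedroom"),
]

# Build an undirected adjacency list: each doorway connects two rooms.
graph = {}
for a, b in doorways:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

for room in sorted(graph):
    print(room, "->", sorted(graph[room]))

Nothing in the input is anything but words, yet the resulting graph
captures spatial relationships that can be checked against the house
itself.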

Jason