[ExI] Another ChatGPT session on qualia
Adrian Tymes
atymes at gmail.com
Wed Apr 26 22:18:11 UTC 2023
On Wed, Apr 26, 2023 at 3:05 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> On Wed, Apr 26, 2023 at 3:45 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Wed, Apr 26, 2023 at 2:33 PM Gordon Swobe via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> This is the section of GPT's reply that I wish everyone here understood:
>>>
>>> > My responses are generated based on patterns in the text and data that
>>> > I have been trained on, and I do not have the ability to truly
>>> > understand the meaning of the words I generate. While I am able to
>>> > generate text that appears to be intelligent and coherent, it is
>>> > important to remember that I do not have true consciousness or
>>> > subjective experiences.
>>>
>>> GPT has no true understanding of the words it generates. It is designed
>>> only to generate words and sentences and paragraphs that we, the end-users,
>>> will find meaningful.
>>>
>>> *We, the end-users*, assign meaning to the words. Some
>>> people mistakenly project their own mental processes onto the language
>>> model and conclude that it understands the meanings.
>>>
>>
>> How is this substantially different from a child learning to speak from
>> the training data of those around the child? It's not pre-programmed:
>> those surrounded by English speakers learn English; those surrounded by
>> Chinese speakers learn Chinese.
>>
>
> As Tara pointed out so eloquently in another thread, children ground the
> symbols, sometimes literally putting objects into their mouths to better
> understand them. This is of course true of conscious people generally. As
> adults we do not put things in our mouths to understand them, but as
> conscious beings with subjective experience, we ground symbols/words with
> experience. This can be subjective experience of external objects, or of
> inner thoughts and feelings.
>
> Pure language models have no access to subjective experience and so can
> only generate symbols from symbols with no understanding or grounding of
> any of them. I could argue the same is true of multi-modal models, but I
> see no point to it, as so many here believe that even pure language
> models can somehow access the referents from which words derive their
> meanings, i.e., that LLMs can somehow ground symbols even with no sensory
> apparatus whatsoever.
>
Agreed, for the record, but I figured the point needed clarifying.