[ExI] Another ChatGPT session on qualia

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 05:03:37 UTC 2023


Giovanni: We don't know how humans understand. The question is not whether an
LLM understands as a human does, but whether it understands at all.

GPT-4: You are right that the understanding of human cognition is still an
ongoing area of research, and there is much that remains to be discovered.
Comparing human understanding to that of a language model like me can be
challenging.

From a technical perspective, LLMs can be said to understand the text to a
certain extent. My ability to process text and provide relevant responses
is based on the recognition of patterns, relationships, and statistical
associations in the data I have been trained on.
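
To make "patterns and statistical associations" concrete, here is a minimal,
purely illustrative sketch in Python of a toy bigram predictor: a raw lookup
table of word-pair counts. This is the kind of surface statistics at issue
here, and it is deliberately far simpler than the learned model inside GPT-4:

    # Toy bigram "model": a lookup table of word-pair counts.
    # Illustrative only; real LLMs learn a compressed neural model,
    # not an explicit table like this.
    from collections import defaultdict, Counter

    corpus = "the sky is blue and the ground is green and the sky is clear".split()

    # Count how often each word follows each other word.
    bigram_counts = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        bigram_counts[prev_word][next_word] += 1

    def predict_next(word):
        """Return the most frequent follower of `word` in the toy corpus."""
        followers = bigram_counts.get(word)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

    print(predict_next("sky"))  # -> "is"
    print(predict_next("is"))   # -> "blue" (first among equally frequent followers)

Such a table blows up combinatorially as soon as you condition on more than a
word or two of context, which is why the point made later in this thread about
needing a model rather than raw statistics matters.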



*1) How do we know humans do not do the same?*
*2) Who cares if humans do not do the same? Birds use feathers and flapping
wings to fly; airplanes do not. They both fly.*





On Wed, Apr 26, 2023 at 9:55 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> It is so ridiculous, Gordon: how can it tell you it doesn't understand if
> it cannot understand?
> Do you know the paradox of the Cretan liar? Epimenides of Crete says that
> Cretans are always liars, but he was from Crete. So did he lie about
> Cretans all being liars?
> Come on, man.
> Please do not evade my question. Explain how it arrived at the conclusion
> that it had made a mistake when it colored the ground below the horizon the
> same color as the sky, and how it then recognized the error and corrected
> it. Please go ahead and explain how this can be done without a deep
> understanding of the context of the conversation, of spatial relationships,
> and of the conventions used in a very stylized drawing like this.
> Please go ahead.
> If you don't, then I think every claim that GPT-4 lacks understanding is
> completely baseless.
>
> LOL
>
> On Wed, Apr 26, 2023 at 9:44 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>>
>> On Wed, Apr 26, 2023 at 7:06 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>> *We, the end-users, assign meaning to the words. Some people mistakenly
>>>> project their own mental processes onto the language model and conclude
>>>> that it understands the meanings.*
>>>>
>>>
>>> This shows again that Gordon has no clue about how LLMs work. They do
>>> understand because they have built a model of language; it is not just a
>>> simple algorithm that measures and assigns a probability to a cluster of
>>> words. Training uses statistics as a starting point, but I have already
>>> shown you it is more than that, because without a model you cannot handle
>>> the combinatorial explosion of assigning probabilities to clusters of
>>> words: with a vocabulary of roughly 50,000 tokens there are about
>>> 50,000^10, on the order of 10^47, possible ten-word sequences, far too
>>> many to tabulate from any training corpus. But of course Gordon ignores
>>> all the evidence presented to him.
>>>
>>> LLMs need to have contextual understanding; they need to create an
>>> internal model and a model of the external world.
>>>
>>> GPT-4, if told to analyze an output it gave, can do so and realize what
>>> it did wrong. I have demonstrated this many times, for example when it
>>> understood that it had colored the ground below the horizon in a drawing
>>> the same color as the sky. The damn thing said, "I apologize, I colored in
>>> the wrong region; it should have been all uniform green". It came up with
>>> this by itself!
>>> Gordon, explain how this is done without understanding.
>>> You NEVER NEVER address this sort of evidence. NEVER.
>>>
>>> If a small child had this level of self-awareness, we would think it was a
>>> very f.... clever child.
>>> It really boils my blood that there are people repeating that this is not
>>> understanding.
>>>
>>> As Ben said before, either we say that all our children are parrots and
>>> idiots without understanding (and in fact that all of us are), and that
>>> all the psychological and cognitive tests, exams, and intellectual
>>> achievements such as creativity, logical thinking, and having a theory of
>>> mind are useless, or we have to admit that when AIs show the same
>>> abilities as a human (or better) in different contexts, this should be
>>> considered a sign that they have minds of their own.
>>>
>>> Anything else is intellectually dishonest and just an ideological
>>> position based on fear and misunderstanding.
>>>
>>
>> This only shows me that you are one of those people who mistakenly
>> project your own mental processes onto the model. In so doing, you deceive
>> yourself into wrongly believing that the model understands the meanings,
>> when it is you who understands them. But of course I already knew that.
>>
>> Not only does the model have no true understanding of the meanings, but
>> it flat out tells you it doesn't:
>>
>>
>> *My responses are generated based on patterns in the text and data that I
>> have been trained on, and I do not have the ability to truly understand the
>> meaning of the words I generate.* -GPT
>>
>> -gts
>>
>

