[ExI] Another ChatGPT session on qualia

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 27 04:44:10 UTC 2023


On Wed, Apr 26, 2023 at 7:06 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>> *We, the end-users, assign meaning to the words. Some people mistakenly
>> project their own mental processes onto the language model and conclude
>> that it understands the meanings.*
>>
>
> This shows again that Gordon has no clue about how LLMs work. They do
> understand, because they built a model of language: it is not just a
> simple algorithm that measures and assigns a probability to each cluster
> of words. It uses statistics as a starting point, but I have already
> shown you it is more than that, because without a model you cannot handle
> the combinatorial explosion of assigning probabilities to clusters of
> words (a back-of-envelope sketch of that explosion follows the quoted
> text). But of course Gordon ignores all the evidence presented to him.
>
> LLMs need contextual understanding: they need to create an internal
> model and an external model of the world.
>
> GPT-4, if told to analyze an output it gave, can do so and recognize what
> it did wrong. I have demonstrated this many times: for example, it
> understood that, in a drawing, it had colored the ground below the
> horizon the same as the sky. The damn thing said, "I apologize, I colored
> in the wrong region, it should have been all uniform green". It came up
> with this by itself!
> Gordon, explain how this is done without understanding.
> You NEVER NEVER address this sort of evidence. NEVER.
>
> If a small child had this level of self-awareness, we would think it was
> a very f.... clever child.
> It really makes my blood boil that there are people repeating that this
> is not understanding.
>
> As Ben said before, either we say that all our children are parrots and
> idiots without understanding (and, in fact, that all of us are, and that
> all the psychological and cognitive tests, exams, and intellectual
> achievements such as creativity, logical thinking, and having a theory of
> mind are useless), or we have to admit that when AIs show the same
> abilities as a human, or better, in different contexts, those abilities
> should be considered signs of having a mind of their own.
>
> Anything else is intellectually dishonest and just an ideological position
> based on fear and misunderstanding.
>
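For what it's worth, Giovanni's combinatorial point itself is easy to
check with a back-of-envelope calculation. Here is a minimal sketch in
Python; the vocabulary size and context length are illustrative
assumptions, not GPT-4's actual values:

# Why a raw lookup table over clusters of words is infeasible.
# Both constants are illustrative assumptions, not GPT-specific values.
VOCAB_SIZE = 50_000   # assumed vocabulary size, in tokens
CONTEXT_LEN = 10      # assumed context length, in tokens

# A pure n-gram table would need one entry per possible context:
contexts = VOCAB_SIZE ** CONTEXT_LEN
print(f"{contexts:.2e} possible {CONTEXT_LEN}-token contexts")  # ~9.77e+46

# Even a hypothetical model with 1e12 parameters could store only a
# vanishing fraction of such a table:
print(f"fraction storable: {1e12 / contexts:.1e}")  # ~1.0e-35

None of that arithmetic, however, settles the question of understanding.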

This only shows me that you are one of those people who mistakenly project
their own mental processes onto the model. In so doing, you deceive
yourself into wrongly believing that the model understands the meanings,
when it is you who understands them. But of course I already knew that.

Not only does the model have no true understanding of the meanings, but it
flat-out tells you so:

> My responses are generated based on patterns in the text and data that I
> have been trained on, and I do not have the ability to truly understand
> the meaning of the words I generate.
> -GPT
-gts