[ExI] Another ChatGPT session on qualia

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 01:03:48 UTC 2023


>
>
> *We, the end-users, assign meaning to the words. Some people mistakenly
> project their own mental processes onto the language model and conclude
> that it understands the meanings.*
>

This shows again that Gordon has no clue about how LLMs work. They do
understand, because they have built a model of language; it is not just a
simple algorithm that measures and assigns a probability to a cluster of
words. It uses statistics as a starting point, but I have already shown you
it is more than that, because without a model you cannot handle the
combinatorial explosion of assigning probabilities to clusters of words.
But of course Gordon ignores all the evidence presented to him.
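
Just to make the combinatorial point concrete, here is a rough
back-of-the-envelope sketch in Python (my own toy numbers, purely
illustrative): an explicit lookup table of probabilities over even short
word clusters would need astronomically many entries, while a model
compresses those regularities into a comparatively tiny set of parameters.

    # Rough illustration of the combinatorial explosion (toy numbers, not
    # exact figures for any real system).
    vocab_size = 50_000        # assumed subword vocabulary size
    context_length = 10        # a short 10-token context

    # An explicit probability table would need one entry per possible context:
    n_contexts = vocab_size ** context_length
    print(f"distinct {context_length}-token contexts: {n_contexts:.3e}")
    # -> about 9.8e+46, far beyond anything that could ever be stored

    # A large model instead compresses the regularities into parameters:
    n_parameters = 1e12        # assumed order of magnitude for a very large model
    print(f"contexts per parameter: {n_contexts / n_parameters:.3e}")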

LLMs need contextual understanding; they need to create an internal model
and an external model of the world.

GPT-4, if told to analyze an output it gave, can do that and realize what
it did wrong. I have demonstrated this many times, for example when it
understood that it had colored the ground below the horizon in a drawing
the same as the sky. The damn thing said, "I apologize, I colored in the
wrong region, it should have been all uniform green". It came up with this
by itself!
Gordon, explain how this is done without understanding.
You NEVER NEVER address this sort of evidence. NEVER.

If a small child showed this level of self-awareness we would think it was
a very f.... clever child.
It really boils my blood that there are people who keep repeating that
this is not understanding.

As Ben said before, either we say that all our children are parrots and
idiots without understanding (and in fact that all of us are), and that
all the psychological and cognitive tests, exams, and intellectual
achievements such as creativity, logical thinking, and having a theory of
mind are useless, or we have to admit that when AIs show the same
abilities as a human (or better) in different contexts, those abilities
should be considered signs of a mind of their own.

Anything else is intellectually dishonest and just an ideological position
based on fear and misunderstanding.

Giovanni





On Wed, Apr 26, 2023 at 5:45 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

>
>
> *As Tara pointed out so eloquently in another thread, children ground the
> symbols, sometimes literally putting objects into their mouths to better
> understand them. This is of course true of conscious people generally. As
> adults we do not put things in our mouths to understand them, but as
> conscious beings with subjective experience, we ground symbols/words with
> experience. This can be subjective experience of external objects, or of
> inner thoughts and feelings. Pure language models have no access to
> subjective experience and so can only generate symbols from symbols, with
> no understanding or grounding of any of them. I could argue the same is
> true of multi-modal models, but I see no point to it, as so many here
> believe that even pure language models can somehow access the referents
> from which words derive their meanings, i.e., that LLMs can somehow ground
> symbols even with no sensory apparatus whatsoever.*
>
> All this is just based on ideology and not careful thinking. It is clear
> to me now.
> But let's reply in a logical fashion.
> 1) What is one of the most common first words for a child? Mama. But
> "Mama" doesn't refer to anything initially for the child. It is a babbling
> sound children make, because some programming in our brain makes us test
> making sounds at random to train our vocal cords and the coordination
> between the many anatomical parts that support vocal communication. But
> somehow the babbling gets associated with the mother. Who is doing the
> grounding? Mostly the mother, not the child. The mother overreacts to
> these first babblings, thinking that the child is calling her, and assigns
> this name to herself, which is basically the opposite of grounding a
> specific sound to a specific intended target, lol. It is mostly in the
> mother's head. Then the mother teaches the child that this is her name,
> and the child learns to associate that sound with the mother. This is such
> a universal phenomenon that in most languages the word for mom is
> basically the same. This alone should destroy any simplistic idea that
> humans learn language or meaning by making a one-to-one association with
> some real object in the physical world. It is much more complex than that,
> and it has many layers of interaction and abstraction, both at the
> individual and at the social level.
> 2) When the mother (notice that even in this case we are talking about a
> complex interaction between mother and child) points to an object and says
> APPLE and the child listens to the mother, what exactly is going on there?
> If Gordon were right that there is some grounding process going on there,
> at least in his very naive understanding of grounding, the association
> would happen more or less immediately. It doesn't: the mother has to show
> the apple several times and repeat the name. Then finally it happens, and
> the child repeats the name. That repetition doesn't mean the child has
> made the association; it could simply mean the child is repeating the
> sound the mother makes. In fact, that is an important step in learning a
> language: first the child behaves like a little parrot (being a parrot is
> actually a good thing for learning languages, not a bad thing, as Bender
> seems to claim). The true understanding of the word apple most of the time
> comes later (there are situations where the mother will point to the apple
> and make the sound, and the child doesn't respond until one day he holds
> an apple and says apple), when the child sees an apple or holds an apple
> or tastes an apple and says "APPLE". Is this grounding as Gordon
> understands it?
> NO! Why? Well, the mother pointed not at one single apple in this process
> but at many. If it were grounding as naively understood, then pointing to
> different objects and calling them all apples would have confused the
> child more and more. These objects didn't have the same exact size, they
> maybe had different colors (some red, some yellow) and slightly different
> tastes, some more sour, some more sweet. They are different. So I don't
> say that what Gordon calls "grounding" is actually the opposite of
> grounding just to be contrarian, but because I deeply believe this idea of
> grounding is bullshit, utter bullshit, and in fact it is the core of all
> our misunderstanding, and the reason most of current linguistics doesn't
> understand language at the higher level that is necessary to understand
> languages, not just in humans but in the alien minds of AI.
> This process cannot be grounding as a one-to-one, one-directional
> association between the object and the meaning of the object.
> For the child to make the connection, it requires understanding what the
> mother means by pointing to the object and uttering a sound (that the two
> are connected somehow is not a simple idea to process), that the mother
> doesn't mean this particular object in front of me at this particular
> time, that a red apple and a yellow apple can still be apples (so the
> child needs to figure out what they have in common, what they don't, and
> what is not important for identifying them as apples), and that if the
> apple is cut in slices, it is still an apple, and so on and on and on. Do
> you see how bullshit the idea of grounding is?
> How can a cut apple (I just thought of this) still be an apple? But the
> child somehow knows!
> It is not the grounding that counts here in learning the language but the
> high-level abstraction: associating a sound with an object, the fact that
> different objects can be put in a broad category, that an object can be
> cut into pieces and still be the same object, as a whole or in part (half
> an apple is still an apple), not physically but conceptually, from an
> abstract point of view.
> There is no grounding without all this process of abstraction, and this
> process of abstraction is in a way "GOING AWAY FROM GROUNDING", in the
> sense that it requires literally moving away from the specific sensory
> experience of this particular object in front of me. The grounding is at
> most a feedback loop from abstraction to object, from object to
> abstraction, and so on. It is not at all the main component in giving
> meaning to language. It is easy to see how one can build a language that
> is all abstraction and categorization. We have shown this many times, when
> we showed that we can build a symbolic language made only of 0s and 1s, or
> how we can build math from the empty set, and so on. But what I have
> discussed above shows that abstraction comes before grounding and is
> necessary for grounding to happen.
> The phenomenon of "grounding" is really a misnomer.
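>
> To give a concrete feel for "building math from the empty set", here is a
> tiny Python sketch of the standard von Neumann construction (my own
> illustration, with a hypothetical helper name): every natural number is
> just a set built out of previously built sets, starting from nothing, with
> no physical referent anywhere in sight.
>
>     # Von Neumann construction: 0 = {}, and n+1 = n union {n}.
>     # Every "number" is a pure symbol structure with no sensory grounding.
>     def von_neumann(n):
>         """Return the von Neumann encoding of n as a frozenset."""
>         number = frozenset()            # 0 is the empty set
>         for _ in range(n):
>             number = number | {number}  # successor: n+1 = n union {n}
>         return number
>
>     print(von_neumann(0))        # frozenset()
>     print(von_neumann(1))        # frozenset({frozenset()})
>     print(len(von_neumann(3)))   # 3 -- the set for n has exactly n elements
>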
> What happens in this exercise of naming things is that it allows us to
> see connections between things. The objects are not what is important but
> the connections, the patterns. Now, in the case of the mother teaching the
> child a language that has to do with objects in the real world, it happens
> that this language has survival value, because learning patterns and
> regularities in the natural world, being able to think about them, and
> being able to communicate these patterns to others ("A wolf is coming!")
> confer an evolutionary advantage. So yes, it has additional value; it is
> not useless.
> But the fact that most human language has some relevance to understanding
> the physical world does not show AT ALL that the association with the
> physical world is required for giving meaning to a language.
> I don't know how to make this argument clearer or more compelling.
> One could write an entire book on this, and maybe even invent an entire
> language that has nothing to do with real physical objects and is entirely
> self-referential. It is obvious to me that the brain did that (anything
> the brain knows is trains of electrical spikes anyway, including sensory
> experience) and that LLMs did that too.
> But it is clear from my arguments above that Gordon and the linguists are
> wrong.
>
> By the way, I pointed out that Umberto Eco, who was one of the most
> renowned experts in semiotics, had a similar understanding of this process
> of grounding and called it the "referential fallacy". For him, a sign
> (which is what words are) only points to another sign, in a never-ending
> process. The never-ending part is not necessary for most communication,
> because at some point we simply decide that we know enough about what
> something means (we basically use Bayesian inference in our brains to do
> that), and LLMs do the same, settling on some probabilistic value for the
> meaning of the words they use. If something is highly probable, it is
> probably true (pun intended).
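>
> To make the "settling on a probable meaning" idea concrete, here is a
> minimal Bayesian-update toy in Python (my own illustration, not a claim
> about what brains or LLMs literally compute): after a handful of
> consistent observations, one candidate meaning of a word becomes so much
> more probable than the alternative that we simply stop looking for
> further signs.
>
>     # Toy Bayesian update over two candidate meanings of an unfamiliar word.
>     prior = {"fruit": 0.5, "toy": 0.5}         # start undecided
>     likelihood = {"fruit": 0.9, "toy": 0.2}    # assumed P(observation | meaning)
>
>     posterior = dict(prior)
>     for _ in range(4):                         # four consistent observations
>         unnorm = {m: posterior[m] * likelihood[m] for m in posterior}
>         total = sum(unnorm.values())
>         posterior = {m: p / total for m, p in unnorm.items()}
>
>     print(posterior)   # roughly {'fruit': 0.998, 'toy': 0.002} -- good enough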
>
> Giovanni
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> On Wed, Apr 26, 2023 at 3:19 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Wed, Apr 26, 2023 at 3:05 PM Gordon Swobe <gordon.swobe at gmail.com>
>> wrote:
>>
>>> On Wed, Apr 26, 2023 at 3:45 PM Adrian Tymes via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Wed, Apr 26, 2023 at 2:33 PM Gordon Swobe via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> This is the section of GPTs' reply that I wish everyone here
>>>>> understood:
>>>>>
>>>>> > My responses are generated based on patterns in the text and data
>>>>> that I have been trained on, and I do not have the ability to truly
>>>>> > understand the meaning of the words I generate. While I am able to
>>>>> generate text that appears to be intelligent and coherent, it is
>>>>> > important to remember that I do not have true consciousness or
>>>>> subjective experiences.
>>>>>
>>>>> GPT has no true understanding of the words it generates. It is
>>>>> designed only to generate words and sentences and paragraphs that we, the
>>>>> end-users, will find meaningful.
>>>>>
>>>>> *We, the end-users*, assign meaning to the words. Some
>>>>> people mistakenly project their own mental processes onto the language
>>>>> model and conclude that it understands the meanings.
>>>>>
>>>>
>>>> How is this substantially different from a child learning to speak from
>>>> the training data of those around the child?  It's not pre-programmed:
>>>> those surrounded by English speakers learn English; those surrounded by
>>>> Chinese speakers learn Chinese.
>>>>
>>>
>>> As Tara pointed out so eloquently in another thread, children ground the
>>> symbols, sometimes literally putting objects into their mouths to better
>>> understand them. This is of course true of conscious people generally. As
>>> adults we do not put things in our mouths to understand them, but as
>>> conscious beings with subjective experience, we ground symbols/words with
>>> experience. This can be subjective experience of external objects, or of
>>> inner thoughts and feelings.
>>>
>>> Pure language models have no access to subjective experience and so can
>>> only generate symbols from symbols with no understanding or grounding of
>>> any of them. I could argue the same is true of multi-modal models, but I
>>> see no point to it, as so many here believe that even pure language
>>> models can somehow access the referents from which words derive their
>>> meanings, i.e., that LLMs can somehow ground symbols even with no sensory
>>> apparatus whatsoever.
>>>
>>
>> Agreed, for the record, but I figured the point needed clarifying.
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>

