<div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><b><i><br>We, the end-users</i>, assign meaning to the words. Some people mistakenly project their own mental processes onto the language model and conclude that it understands the meanings.</b></div></div></blockquote><div> </div><div>This shows again Gordon has no clue about how LLMs work. They do understand because they made a model of language, it is not just a simple algo that measures and assign a probability to a cluster of world. It used stats as a starting point but I have already shown you it is more than that because without a model you cannot handle the combinatorial explosion of assigning probabilities to clusters of words. But of course Gordon ignores all the evidence presented to him. <br><br> LLMs need to have contextual understanding, they need to create an internal model and external model of the world. <br><br>GPT-4 if told to analyze an output it gave, can do that and realize what it did wrong. I have demonstrated this many times when for example it understood that it colored the ground below the horizon in a drawing the same as the sky. The damn thing said, "I apologize, I colored in the wrong region, it should have been all uniform green". It came up with this by itself!<br>Gordon, explain how this is done without understanding. <br>You NEVER NEVER address this sort of evidence. NEVER. <br><br>If a small child had this level of self-awareness we would think it is a very f.... clever child. <br>It really boils my blood that there are people repeating this is not understanding.<br><br>As Ben said before or we then say all our children are parrots and idiots without understanding, and actually all of us, that all the psychological and cognitive tests, exams, different intellectual achievements such as creativity and logical thinking, and having a theory of mind are useless or we have to admit that if AIs that show the same abilities of a human (or better) in different contexts then should be considered as signs of having a mind of their own. <br><br>Anything else is intellectually dishonest and just an ideological position based on fear and misunderstanding. <br><br>Giovanni <br><br><br><br><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 26, 2023 at 5:45 PM Giovanni Santostasi <<a href="mailto:gsantostasi@gmail.com">gsantostasi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><b>As Tara pointed out so eloquently in another thread, children ground the symbols, sometimes literally putting objects into their mouths to better understand them. This is of course true of conscious people generally. As adults we do not put things in our mouths to understand them, but as conscious beings with subjective experience, we ground symbols/words with experience. This can be subjective experience of external objects, or of inner thoughts and feelings.<br><br>Pure language models have no access to subjective experience and so can only generate symbols from symbols with no understanding or grounding of any or them. 
LLMs need contextual understanding; they need to build an internal model of language and of the external world.

GPT-4, if told to analyze an output it gave, can do that and realize what it did wrong. I have demonstrated this many times, for example when it understood that it had colored the ground below the horizon in a drawing the same color as the sky. The damn thing said, "I apologize, I colored in the wrong region, it should have been all uniform green". It came up with this by itself!
Gordon, explain how this is done without understanding.
You NEVER NEVER address this sort of evidence. NEVER.

If a small child had this level of self-awareness we would think it is a very f.... clever child.
It really boils my blood that there are people repeating that this is not understanding.

As Ben said before, either we say that all our children are parrots and idiots without understanding (and in fact that all of us are), and that all the psychological and cognitive tests, exams, and different intellectual achievements such as creativity, logical thinking, and having a theory of mind are useless, or we have to admit that when AIs show the same abilities as a human (or better) in different contexts, those abilities should be considered signs of having a mind of their own.

Anything else is intellectually dishonest, just an ideological position based on fear and misunderstanding.

Giovanni

On Wed, Apr 26, 2023 at 5:45 PM Giovanni Santostasi <gsantostasi@gmail.com> wrote:

> As Tara pointed out so eloquently in another thread, children ground the symbols, sometimes literally putting objects into their mouths to better understand them. This is of course true of conscious people generally. As adults we do not put things in our mouths to understand them, but as conscious beings with subjective experience, we ground symbols/words with experience. This can be subjective experience of external objects, or of inner thoughts and feelings.
>
> Pure language models have no access to subjective experience and so can only generate symbols from symbols, with no understanding or grounding of any of them. I could argue the same is true of multi-modal models, but I see no point to it, as so many here believe that even pure language models can somehow access the referents from which words derive their meanings, i.e., that LLMs can somehow ground symbols even with no sensory apparatus whatsoever.

All this is just based on ideology and not careful thinking. That is clear to me now.
But let's reply in a logical fashion.

1) What is one of the most common first words for a child? "Mama". But "Mama" doesn't refer to anything initially for the child. It is a babbling sound children make because some programming in our brain makes us test sounds randomly, to train our vocal cords and the coordination between the many anatomical parts that support vocal communication. But somehow the babbling gets associated with the mother. Who is doing the grounding? Mostly the mother, not the child. The mother overreacts to this first babbling, thinking the child is calling her, and assigns the name to herself, which is basically the opposite of grounding a specific sound to a specific intended target, lol. It is mostly in the mother's head. Then the mother teaches the child that this is her name, and the child learns to associate that sound with the mother. This is such a universal phenomenon that in most languages the name for mom is basically the same. This alone should destroy any simplistic idea that humans learn language or meaning by making a one-to-one association with some real object in the physical world. It is much more complex than that, with many layers of interaction and abstraction at both the individual and the social level.

2) When the mother (notice that even here we are talking about a complex interaction between mother and child) points to an object and says "APPLE" and the child listens, what exactly is going on? If Gordon were right that some grounding process is happening, at least in his very naive understanding of grounding, the association would happen more or less immediately. It doesn't: the mother has to show the apple several times and repeat the name. Then finally the child repeats the name. That repetition doesn't mean the child has made the association; it could simply mean the child repeats the sound the mother makes. In fact, that is an important step in learning a language: at first the child behaves like a little parrot (being a parrot is actually a good thing for learning languages, not bad as Bender seems to claim). The true understanding of the word "apple" usually comes later (there are cases where the mother points to the apple and makes the sound and the child doesn't respond, until one day he holds an apple and says "apple"), when the child sees an apple, or holds an apple, or tastes an apple, and says "APPLE". Is this grounding as Gordon understands it?
NO! Why? Because in this process the mother pointed not at one single apple but at many. If it were grounding as naively understood, then pointing to different objects and calling them all apples would have confused the child more and more. These objects didn't have exactly the same size, they may have had different colors (some red, some yellow) and slightly different tastes, some more sour, some more sweet. They are different.
So when I say that what Gordon calls "grounding" is actually the opposite of grounding, I don't say it to be contrarian, but because I deeply believe this idea of grounding is bullshit, utter bullshit, and in fact it is the core of all our misunderstanding, and the reason most of current linguistics doesn't understand language at the higher level necessary to understand it not just in humans but in the alien minds of AI.
This process cannot be grounding in the sense of a one-to-one, one-directional association between the object and the meaning of the object.
For the child to make the connection requires understanding what the mother means by pointing to the object and uttering a sound (that the two are connected at all is not a simple idea to process), that the mother doesn't mean this particular object in front of me at this particular time, and that a red apple and a yellow apple can both still be apples (so the child needs to figure out what they have in common, what they don't, and what is not important for identifying them as apples). The child needs to understand that if the apple is cut into slices it is still an apple, and so on and on and on. Do you see how bullshit the idea of grounding is?
How can a cut apple (I just thought of this) still be an apple? But the child somehow knows!
It is not the grounding that counts here in learning the language but the high-level abstraction: associating a sound with an object, the fact that different objects can be put in a broad category, that objects can be cut into pieces and still be the same object, in whole or in part (half an apple is still an apple), not physically but conceptually, from an abstract point of view.
There is no grounding without all this process of abstraction, and this process of abstraction is in a way "GOING AWAY FROM GROUNDING", in the sense that it requires literally moving away from the specific sensory experience of this particular object in front of me. The grounding is at most a feedback loop from abstraction to object, from object to abstraction, and so on. It is not at all the main component in giving meaning to language. It is easy to see how one could build a language that is all abstraction and categorization. We have shown this many times, when we showed that we can build a symbolic language made of 0s and 1s, or that we can build math from the empty set, and so on (see the little sketch below). But what I have discussed above shows that abstraction comes before grounding and is necessary for grounding to happen.
The phenomenon of grounding is really a misnomer.
What happens in this exercise of naming things is that it allows us to see connections between things. The objects are not what is important; the connections, the patterns, are. Now, in the case of a mother teaching a child a language that has to do with objects in the real world, it happens that this language has survival value, because learning patterns and regularities in the natural world, being able to think about them, and being able to communicate them to others ("A wolf is coming!") has an evolutionary advantage. So yes, it has additional value, it is not useless.
But the fact that most human language has some relevance to understanding the physical world does not show AT ALL that association with the physical world is required for giving meaning to a language.
I don't know how to make this argument more clear and compelling.
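As a minimal illustration of the "math from the empty set" point, here is a little Python sketch of the standard von Neumann construction (the construction is textbook set theory; the code is just my toy rendering of it). Every "number" is defined purely in terms of other sets, symbols pointing only at other symbols, with no physical referent anywhere:

    # Build the natural numbers from the empty set alone, von Neumann style:
    # 0 = {} and n + 1 = n ∪ {n}. Nothing here refers to anything outside
    # the construction itself.

    def successor(n: frozenset) -> frozenset:
        """n + 1 = n ∪ {n}."""
        return n | frozenset({n})

    zero = frozenset()            # 0 = {}
    numbers = [zero]
    for _ in range(4):
        numbers.append(successor(numbers[-1]))

    # Order falls out of the construction for free: m < n exactly when m ∈ n.
    print(numbers[2] in numbers[4])   # True: 2 < 4
    print(len(numbers[3]))            # 3: each ordinal contains its predecessors

Yet arithmetic and ordering emerge, which is the sense in which a purely self-referential symbol system can still carry meaning.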
One could write an entire book on this, and maybe even invent an entire language that has nothing to do with real physical objects and is entirely self-referential. It is obvious to me that the brain did exactly that (everything the brain knows arrives as trains of electrical spikes anyway, including sensory experience) and that LLMs did it too.
It is clear from my arguments above that Gordon and the linguist are wrong.

By the way, I pointed out that Umberto Eco, who was one of the most renowned experts in semiotics, had a similar understanding of the process of grounding and called it the "referential fallacy". For him, a sign (which is what words are) only points to another sign, in a never-ending process. The never-ending part is not a problem for most communication, because at some point we simply decide we know enough about what something means (we use basically Bayesian inference in our brains to do that), and LLMs do the same, settling on some probabilistic value for the meaning of the words they use; see the toy example below. If something is highly probable, it is probably true (pun intended).
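To make the "settling on a probable meaning" idea concrete, here is a toy sketch in Python (all the priors and likelihood numbers are invented purely for illustration; nothing here is meant as a description of how any actual LLM is implemented):

    # Toy Bayesian update over two candidate senses of the word "apple",
    # given context words. All numbers are made up for illustration.

    priors = {"apple_fruit": 0.5, "apple_company": 0.5}

    # P(context word | sense) -- invented values.
    likelihoods = {
        "apple_fruit":   {"pie": 0.30, "iphone": 0.01, "tree": 0.20},
        "apple_company": {"pie": 0.02, "iphone": 0.40, "tree": 0.01},
    }

    def update(posterior, word):
        """One Bayesian step: multiply by the likelihood, then renormalize."""
        unnorm = {s: p * likelihoods[s].get(word, 1e-3) for s, p in posterior.items()}
        total = sum(unnorm.values())
        return {s: p / total for s, p in unnorm.items()}

    posterior = dict(priors)
    for context_word in ["iphone", "tree"]:
        posterior = update(posterior, context_word)
        print(context_word, {s: round(p, 3) for s, p in posterior.items()})

    # The belief never reaches certainty; we simply stop once one sense is
    # probable enough -- "if something is highly probable, it is probably true".

The chain of signs never has to bottom out in a referent; it just has to become probable enough to act on.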
Giovanni

On Wed, Apr 26, 2023 at 3:19 PM Adrian Tymes via extropy-chat <extropy-chat@lists.extropy.org> wrote:

> On Wed, Apr 26, 2023 at 3:05 PM Gordon Swobe <gordon.swobe@gmail.com> wrote:
>
>> On Wed, Apr 26, 2023 at 3:45 PM Adrian Tymes via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>>
>>> On Wed, Apr 26, 2023 at 2:33 PM Gordon Swobe via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>>>
>>>> This is the section of GPT's reply that I wish everyone here understood:
>>>>
>>>> "My responses are generated based on patterns in the text and data that I have been trained on, and I do not have the ability to truly understand the meaning of the words I generate. While I am able to generate text that appears to be intelligent and coherent, it is important to remember that I do not have true consciousness or subjective experiences."
>>>>
>>>> GPT has no true understanding of the words it generates. It is designed only to generate words and sentences and paragraphs that we, the end-users, will find meaningful.
>>>>
>>>> We, the end-users, assign meaning to the words. Some people mistakenly project their own mental processes onto the language model and conclude that it understands the meanings.
>>>
>>> How is this substantially different from a child learning to speak from the training data of those around the child? It's not pre-programmed: those surrounded by English speakers learn English; those surrounded by Chinese speakers learn Chinese.
>>
>> As Tara pointed out so eloquently in another thread, children ground the symbols, sometimes literally putting objects into their mouths to better understand them. This is of course true of conscious people generally. As adults we do not put things in our mouths to understand them, but as conscious beings with subjective experience, we ground symbols/words with experience. This can be subjective experience of external objects, or of inner thoughts and feelings.
>>
>> Pure language models have no access to subjective experience and so can only generate symbols from symbols, with no understanding or grounding of any of them. I could argue the same is true of multi-modal models, but I see no point to it, as so many here believe that even pure language models can somehow access the referents from which words derive their meanings, i.e., that LLMs can somehow ground symbols even with no sensory apparatus whatsoever.
>
> Agreed, for the record, but I figured the point needed clarifying.
_______________________________________________
extropy-chat mailing list
extropy-chat@lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat