<div dir="ltr"><div dir="ltr">On Mon, Apr 17, 2023 at 5:17 PM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><br>
> So, what about 'grounding'? These object models in our brains are
> really the 'things' that we are referring to when we say 'potato' or
> 'apple'. You could say that the words are 'grounded' in the object
> models. But they are in our brains!

Yes, I agree, and this is not the first time I have needed to clarify myself on this point. When I first mentioned referents here on ExI some weeks ago, I made the mistake of assuming that everyone would know what I meant, and so my language was not as precise as it ought to have been. Since then, I have tried to be more precise and to clarify that ultimately referents exist, as you say, in our brains. I wrote, for example, that we have referents in our memories and in our dreams; that even if we dream of pink unicorns, they are still referents. Optical illusions, hallucinations, abstract ideas, intuitively known mathematical truths: all these purely subjective phenomena are referents no less than the direct perception of an apple.

With respect to symbol grounding, we need to have some kind of referent in mind to ground whatever symbol we are reading or writing or speaking or hearing. Otherwise, it is meaningless gibberish. The problem for the LLM is that, from its perspective, every word in the entire corpus is a bit of meaningless gibberish. It can do no more than analyze how all the bits of gibberish relate to one another in terms of patterns and statistics. As it turns out, with a large enough corpus, enough processing power, and some additional training by humans, this is enough to put on a pretty good show.

-gts
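P.S. In case it helps to see what I mean by "patterns and statistics" in miniature, here is a toy Python sketch. It is nothing remotely like a real transformer, and the little corpus and variable names are invented purely for illustration; it only shows that a program can learn that 'apple' and 'potato' occur in similar contexts by counting alone, while the strings themselves remain opaque, ungrounded symbols to it.

# A toy sketch, not how any real LLM works: it only illustrates that
# relations between tokens can be extracted from co-occurrence
# statistics alone, with no referent behind any of the symbols.
from collections import Counter, defaultdict

corpus = "the apple is red . the potato is brown . the apple is sweet ."
tokens = corpus.split()

# Count how often each token is followed by each other token (bigrams).
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

# The counts now show that 'apple' and 'potato' behave alike
# (both are followed by 'is'), yet to the program they are only strings.
print(following["apple"])   # Counter({'is': 2})
print(following["potato"])  # Counter({'is': 1})

A real LLM does something vastly more sophisticated with its statistics, but the epistemic situation is the same: patterns over symbols, no referents.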