<div dir="auto"><div>Consider that LLMs are like dictionaries. A complete dictionary can give you the definition of any word, but that definition is in terms of other words in the same dictionary. If you want to understand the *meaning* of any word, you must look up the definitions of each word in its definition, and then the definitions of each word in those definitions, which leads to an infinite regress.</div><div dir="auto"><br></div><div dir="auto">Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different.</div><div dir="auto"><br></div><div dir="auto">-gts</div><div dir="auto"><br></div><div dir="auto">On Sat, Mar 18, 2023, 3:39 AM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com">gordon.swobe@gmail.com</a>> wrote:</div><div dir="auto"><div class="gmail_quote" dir="auto"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I think those who believe LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents.<br><br>Maybe you all already understand this, or maybe you have some reasons why I am wrong.<div><br></div><div>-gts</div></div>
</blockquote></div></div></div>