<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:#000000"><div dir="auto"><br class="gmail-Apple-interchange-newline">Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different. -gts</div><div dir="auto"><br></div><div>As John would say, we have to have examples to really understand meaning. But the words we are talking about are abstractions without any clear objective referent, so we, the AIs, and the dictionary are all reduced to synonyms for 'meaning', 'understanding', etc. In science we use operational definitions to try to solve this problem. bill w </div><div dir="auto"><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Mar 19, 2023 at 1:05 AM Gordon Swobe via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div>Consider that LLMs are like dictionaries. A complete dictionary can give you the definition of any word, but that definition is in terms of other words in the same dictionary. If you want to understand the *meaning* of any word's definition, you must look up the definitions of each word in that definition, and then look up each of the words in those definitions, which leads to an infinite regress. 
</div><div dir="auto"><br></div><div dir="auto">Dictionaries do not actually contain or know the meanings of words, and I see no reason to think LLMs are any different.</div><div dir="auto"><br></div><div dir="auto">-gts</div><div dir="auto"><br></div><div dir="auto">On Sat, Mar 18, 2023, 3:39 AM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com" target="_blank">gordon.swobe@gmail.com</a>> wrote:</div><div dir="auto"><div class="gmail_quote" dir="auto"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I think those who believe LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents.<br><br>Maybe you all already understand this, or maybe you have some reasons why I am wrong.<div><br></div><div>-gts</div></div>
</blockquote></div></div></div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>