<div dir="ltr">It is the exact opposite of what Gordon says, actually.<div>NLPs have demonstrated an amazing capability of generating meaning from statistical properties and demonstrated the power of neural networks for pattern recognition.<br>Several years ago AI experts were skeptical that NLP could derive the laws of grammar from these patterns but not only did they achieve exactly that but also derived semantics and context. <br>There is evidence that NLP have emergent properties like a sophisticated theory of mind: <a href="https://www.newscientist.com/article/2359418-chatgpt-ai-passes-test-designed-to-show-theory-of-mind-in-children/">https://www.newscientist.com/article/2359418-chatgpt-ai-passes-test-designed-to-show-theory-of-mind-in-children/</a><br>All these demonstrated that we have all the tools to create a sentient AI. It is a matter of integrating what we have already developed and expanding existing approaches to other type of reasoning as suggested here:<br><a href="https://arxiv.org/abs/2301.06627">https://arxiv.org/abs/2301.06627</a><br>The AI that Blake Lemoine talked with, and claimed to be conscious (that is an ultimate and meta version of LaMDA) is exactly what I'm describing. Lemoine has stated that Google integrated NLP like ChatGPT with Kurzweil hierarchical organization he described in "How to create a mind" and Jeff Hawkins AI architecture described in "On Intelligence". <br>So, yes existing NLP have limitations but also demonstrate that these limitations are a matter of computational power, how the training was performed and being just one of the modules that is necessary for true AGIs. <br>NLPs are just one slice of the brain, not the entire brain, but they do a good job in reproducing that fundamental part of our brain for consciousness. <br>They do understand, even if in a limited way at this point. <br>Giovanni <br><br><br><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Mar 18, 2023 at 2:41 AM Gordon Swobe via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents.<br><br>Maybe you all already understand this, or maybe you have some reasons why I am wrong.<div><br></div><div>-gts</div></div>
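For concreteness, here is what the "predicting which word should come next" that Gordon describes looks like in practice. This is a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 checkpoint; the prompt is an illustrative example, not something either of us ran. Any causal language model behaves analogously.

```python
# Minimal sketch of next-token prediction, assuming the Hugging Face
# `transformers` library and the small GPT-2 model (both are assumptions
# for illustration; any causal language model works the same way).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch=1, seq_len, vocab_size)

# The model's output at the last position is a probability distribution
# over which token comes next -- this is the "prediction" under discussion.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
```

The sketch only shows the mechanism: the model maps a context to a distribution over next tokens, learned purely from statistical patterns in text. Whether producing such distributions can amount to understanding, as Giovanni argues, or rules it out, as Gordon argues, is exactly the point in dispute; the code itself settles neither.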