<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
span.EmailStyle18
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;
mso-ligatures:standardcontextual;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=EN-US link="#0563C1" vlink="#954F72" style='word-wrap:break-word'><div class=WordSection1><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><div style='border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in'><p class=MsoNormal><b>From:</b> extropy-chat <extropy-chat-bounces@lists.extropy.org> <b>On Behalf Of </b>Gordon Swobe via extropy-chat<br><b>Subject:</b> [ExI] LLMs cannot be conscious<o:p></o:p></p></div><p class=MsoNormal><o:p> </o:p></p><div><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>>…<o:p></o:p></p><p class=MsoNormal>I think those who think LLM AIs like ChatGPT are becoming conscious or sentient like humans fail to understand a very important point: these software applications only predict language. They are very good at predicting which word should come next in a sentence or question, but they have no idea what the words mean. They do not and cannot understand what the words refer to. In linguistic terms, they lack referents.<br><br>Maybe you all already understand this, or maybe you have some reasons why I am wrong.<o:p></o:p></p><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal>-gts<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>I think you are right, gts. I studied a bunch of experiments where chatbots were talking to each other, looking for any indication that there was any invention of ideas, the way two humans invent ideas when they talk to each other. 
The two-bot discussions never blossomed into anything interesting that I could find; rather, they devolved into the one-line fluff that often comes with a forced conversation with a dull, unimaginative person at a party you don’t have the option of leaving.<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>These chatbots are not yet ready to train each other and cause the singularity. But there might be an upside to that: humans continue to survive as a species. That’s a good thing methinks. I like us.<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>spike<o:p></o:p></p></div></div></div></body></html>