<div dir="ltr">I completely agree.<div>Unlike conscious beings, who can experience a redness color quality, and thereby know what the word "redness" means, no abstract bot can know the definition of the word redness.</div><div>They can abstractly represent all that, identical to black and white Marry, but they can't know what redness is like.</div><div>And all intelligent chat bots clearly model this very accurate factual knowledge.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 7, 2023 at 1:54 PM Gordon Swobe via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Fri, Apr 7, 2023 at 12:27 PM Tara Maya via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div>I stand by what I said before: the least helpful way to know if ChatGPT is conscious is to ask it directly.</div></div></blockquote><div><br>I do not disagree with that, but I find it amusing that according to the state-of-the-art LLM, it is not conscious despite so many people wishing otherwise. All I can really say for certain is that GPT-4's reported analysis of language models is consistent with what I understand and believe to be the case.<br><br>-gts<br><br> </div></div></div>