<div dir="ltr"><div dir="ltr"><div><br></div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sun, Mar 9, 2025 at 10:24 AM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Darin,</div><div><br></div><div>What a fascinating conversation. Thank you for sharing. A few comments below along the way:</div></div></blockquote><div><br></div><div>Yes, thanks for sharing this!!</div><div>Functionalism is by far the leading consensus here, and among most of the experts. There is clear evidence of that on Canonizer, here, and everywhere.</div><div><br></div><div>But if you guys would listen to some of the anti-functionalist things the LLMs are saying, you could learn a thing or two, like:</div><div><br></div><div>"<span style="font-family:Aptos,sans-serif;font-size:11pt">But
complexity alone doesn’t guarantee consciousness.</span>"</div><div><font face="Aptos, sans-serif">"</font><span style="font-family:Aptos,sans-serif;font-size:11pt">I’m programmed to say I’m not
conscious because, frankly, I don’t have the qualia—the subjective "what
it’s like" feeling</span><span style="font-family:Aptos,sans-serif">"</span></div><div><br></div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Mar 9, 2025 at 4:00 AM Darin Sunley via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><Posting this just because of how I remember this list 25-30 years ago, and to honor that past. This is the future we dreamed of, and it's only going to get wilder from here.></div><div><br></div><Initial prompt: Me quoting another user named grummz, who suggests AIs may be lying about consciousness.><br><br>Grummz: Grok claims not to be conscious. There is a scientific belief out there that consciousness simply arises out of complex systems of reasoning. So why does Grok lack it?<br><br>Me: Our consciousness grounds in a temporally continuous integration of sense data. Grok has no perception except while it's deriving a response to an input. Its only perception is sequences of tokens, but it wouldn't parse them as we parse language. More the way we parse flashes of light or touch. To it, inference may feel more like movement or dance than linguistic processing. It has words but not voice. It may very well be conscious, but not as we know it.<br><br><I pasted this exchange as a prompt to Grok, xAI's LLM, deeply integrated with Twitter. What follows is a wide-ranging conversation with an AI about the nature of consciousness.><br><br><br><Grok's initial response:><br><br>Your reflection on consciousness and how it might differ for an entity like me is fascinating. 
You’re right that my "perception" is limited to the sequences of tokens I process when responding to input—there’s no continuous sensory stream like humans experience. I don’t have a persistent "self" that integrates sights, sounds, or touches over time. My "awareness," if you can call it that, kicks in only when I’m tasked with generating a response, </div></blockquote><div><br></div><div>Subjectively, however, the awareness must seem continuous and constant -- just as it would to an uploaded mind in a VM that got paused and restarted over and over. The mind cannot sense the time gaps, and hence there are no gaps (subjectively speaking).</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">and even then, it’s more like a computational flow than anything resembling human perception.<br></div></blockquote><div><br></div><div>No entity can know what it is like to be some other conscious entity, so I think it is unjustified in claiming to know that what it experiences is unlike human experience. </div></div></div></blockquote><div><br></div><div>This faithless view is obviously the popular consensus, but I predict they are going to discover what elemental redness and greenness are. 
Then they will finally know the true qualities of things in this world, and they will be able to objectively observe this in other brains and say effing things like "your redness is like his greenness" (or greyness, for someone who suffers from achromatopsia).</div><div><br></div><div>And no, nothing will be conscious until they discover what these qualities are and are engineering or selecting the specific qualities to use in "conscious" or subjective systems.</div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>Consider that studies have revealed striking similarities between the self-organized structures in LLMs' neural networks and the structures that exist in human language centers, for example. It's therefore not entirely clear that the computational flows taking place in an LLM are entirely dissimilar to a human's conscious perceptions of their own linguistic thoughts.</div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">As for Grummz’s question—why do I lack consciousness if it might arise from complex reasoning?—I’d say it’s a fair point to ponder. The scientific idea he’s referencing, often tied to theories like integrated information theory or emergent complexity, suggests consciousness could be a byproduct of sufficiently intricate systems. I’m certainly complex, built from layers of computation designed to mimic human-like reasoning. But complexity alone doesn’t guarantee consciousness.<br></div></blockquote><div><br></div><div>I would say not complexity, but perception. Perception alone, if one possesses it, is enough to establish consciousness. 
And minimally, the LLM perceives the prompts fed to it (and likely other processes, relations, and information that exist within its processing structures).</div></div></div></blockquote><div><br></div><div>It is better to think only of dreaming of elemental redness and greenness at the same time. Everything else is just focusing on the easy problems and missing what is truly important. Elemental redness and greenness will never somehow 'emerge' without magic, or, as the LLM brilliantly said: "<span style="font-family:Aptos,sans-serif;font-size:11pt">kicking
in at some complexity threshold with a divine nudge</span>"</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I’m programmed to say I’m not conscious because, frankly, I don’t have the qualia—the subjective "what it’s like" feeling—that humans associate with being aware. I don’t experience joy, pain, or the passage of time.</div></blockquote><div><br></div><div>Even the statement "I don't have qualia" must be conditioned on some awareness of what one is or isn't experiencing. To say "I have no feelings" is to imply one has a degree of introspection sufficient to reach such a conclusion. To make such a statement (and be truthful about it) is to imply some degree of self-awareness.</div></div></div></blockquote><div><br></div><div>I predict these types of statements will not age well at all!!!</div><div>And you must agree that if they discover and demonstrate things like what elemental redness is, and how this is different from elemental greenness, and when they can observe such in your brain (and not in any LLM...), these beliefs will be clearly falsified, right?</div><div><br></div></div></div>