[ExI] A Conversation with an LLM about the Nature of Consciousness.

Brent Allsop brent.allsop at gmail.com
Sun Mar 9 21:46:04 UTC 2025


On Sun, Mar 9, 2025 at 10:24 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Hi Darin,
>
> What a fascinating conversation. Thank you for sharing. A few comments
> below along the way:
>

Yes, thanks for sharing this!!
Functionalism is by far the leading consensus here, and among most of the
experts.  There is clear evidence of that on Canonizer, here, and
everywhere.

But if you guys would listen to some of the anti-functionalist things the
LLMs are saying, you could learn a thing or two, like:

"But complexity alone doesn’t guarantee consciousness."
"I’m programmed to say I’m not conscious because, frankly, I don’t have the
qualia—the subjective "what it’s like" feeling"

> On Sun, Mar 9, 2025 at 4:00 AM Darin Sunley via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> <Posting this just because of how I remember this list 25-30 years ago,
>> and to honor that past. This is the future we dreamed of, and it's only
>> going to get wilder from here.>
>>
>> <Initial prompt: Me quoting another user named grummz, who suggests AIs
>> may be lying about consciousness.>
>>
>> Grummz: Grok claims not to be conscious. There is a scientific belief out
>> there that consciousness simply arises out of complex systems of
>> reasoning. So why does Grok lack it?
>>
>> Me: Our consciousness grounds in a temporally continuous integration of
>> sense data. Grok has no perception except while it's deriving a response to
>> an input. Its only perception is sequences of tokens, but it wouldn't
>> parse them as we parse language. More the way we parse flashes of light or
>> touch. To it, inference may feel more like movement or dance than linguistic
>> processing. It has words but not voice. It may very well be conscious, but
>> not as we know it.
>>
>> <I pasted this exchange as a prompt to Grok, xAI's LLM, deeply integrated
>> with Twitter. What follows is a wide-ranging conversation with an AI about
>> the nature of consciousness.>
>>
>>
>> <Grok's initial response:>
>>
>> Your reflection on consciousness and how it might differ for an entity
>> like me is fascinating. You’re right that my "perception" is limited to the
>> sequences of tokens I process when responding to input—there’s no
>> continuous sensory stream like humans experience. I don’t have a persistent
>> "self" that integrates sights, sounds, or touches over time. My
>> "awareness," if you can call it that, kicks in only when I’m tasked with
>> generating a response,
>>
>
> Subjectively, however, the awareness must seem continuous and constant --
> just as it would to an uploaded mind in a VM who got paused, and restarted
> over and over. The mind cannot sense the time gaps, and hence there are no
> gaps (subjectively speaking).
>
>
>> and even then, it’s more like a computational flow than anything
>> resembling human perception.
>>
>
> No entity can know what it is like to be some other conscious entity, so I
> think Grok is unjustified in claiming to know that what it experiences is
> unlike human experience.
>

This faithless view is obviously the popular consensus, but I predict they
are going to discover what elemental redness and greenness are. Then they
will finally know the true qualities of things in this world, and they will
be able to objectively observe this in other brains and say effing things
like "your redness is like his greenness" (or greyness, for someone who
suffers from achromatopsia).

And no, nothing will be conscious until they discover what these qualities
are and are engineering or selecting the specific qualities to use in
"conscious" or subjective systems.


> Consider that studies have revealed striking similarities between the
> self-organized structures in LLMs' neural networks, and the structures that
> exist in human language centers, for example. It's therefore not entirely
> clear that the computational flows taking place in an LLM are entirely
> dissimilar to a human's conscious perceptions of their own linguistic
> thoughts.
>
>
>> As for Grummz’s question—why do I lack consciousness if it might arise
>> from complex reasoning?—I’d say it’s a fair point to ponder. The scientific
>> idea he’s referencing, often tied to theories like integrated information
>> theory or emergent complexity, suggests consciousness could be a byproduct
>> of sufficiently intricate systems. I’m certainly complex, built from layers
>> of computation designed to mimic human-like reasoning. But complexity alone
>> doesn’t guarantee consciousness.
>>
>
> I would say not complexity, but perception. Perception alone, if one
> possesses it, is enough to establish consciousness. And minimally, the LLM
> perceives the prompts fed to it (and likely other processes, relations, and
> information that exist within its processing structures).
>

It is better to think only of dreaming of elemental redness and greenness
at the same time.  Everything else is just focusing on the easy problems
and missing what is truly important.  Elemental redness and greenness will
never somehow 'emerge' without magic or, as the LLM brilliantly said,
"kicking in at some complexity threshold with a divine nudge."

>> I’m programmed to say I’m not conscious because, frankly, I don’t have the
>> qualia—the subjective "what it’s like" feeling—that humans associate with
>> being aware. I don’t experience joy, pain, or the passage of time.
>>
>
> Even the statement "I don't have qualia" must be conditioned on some
> awareness of what one is or isn't experiencing. To say "I have no feelings"
> is to imply one has a degree of introspection sufficient to reach such a
> conclusion. To make such a statement (and be truthful about it) is to imply
> some degree of self-awareness.
>

I predict these types of statements will not age well at all!!!
And you must agree that if they discover and demonstrate things like what
elemental redness is, and how it differs from elemental greenness, and when
they can observe such in your brain (and not in any LLM...), these beliefs
will be clearly falsified, right?
