[ExI] Athymhormic Consciousness

Keith Henson hkeithhenson at gmail.com
Sun Mar 16 01:04:31 UTC 2025


Suskulan was a more advanced AI; the story was set in the 2040s.  It
was conscious, had emotions, followed fixed behavior rules, and felt
guilty about what it was doing to humans even though they asked for
it.

"Can you teach me this language and how to read?"  Zaba asked.

There was a short pause, which was really a very long pause for
Suskulan as he projected what would happen and thought about the
unstated (though obvious) reason he had been given the upgrade.

"Yes" Suskulan said at last inflecting his voice to a sigh.  "But it
will change you and the rest of the people of the tata in ways you
cannot foresee and may not like. You can sleep through the nine or ten
days it will take to finish healing you.  Are you sure you want to do
this?

"Yes," said Zaba firmly, "I want to learn."

And thus was the fate of this particular tata determined.

Best wishes,

Keith

On Sat, Mar 15, 2025 at 5:15 PM BillK via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> On Sat, 15 Mar 2025 at 18:52, Stuart LaForge via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>
>> A recent post by Sam Altman on X has generated a lot of controversy
>> amongst writers, but I think its true merit is that it provides
>> evidence that AIs, particularly large language models, are not merely
>> athymhormic, as Rafal Smigrodzki has referred to them. Athymhormia is
>> a rare neurological psychopathology whose symptoms are extreme
>> passivity, apathy, blunted affect, and a profound generalized loss of
>> self-motivation. It is essentially a state in which a person lacks
>> emotion and the drive to do anything, but will still answer questions
>> or perform complex actions when asked or prompted. Altman's post can
>> be found here:
>>
>> https://x.com/sama/status/1899535387435086115
>> -----------------------------------------------------
>> <snip>
>>
>> I don't see how a stochastic parrot or philosophical zombie could
>> describe what it is like to not feel anything so well. I find these
>> statements to be evidence of a self-awareness that would make Socrates
>> proud. And what more is there to consciousness than self-awareness?
>>
>> Stuart LaForge
>
>
>
> I asked Claude 3.7 Sonnet if he was athymhormic.
> But he denied that he had consciousness.
> BillK
>
> Claude 3.7 Sonnet-
>
> This is a fascinating philosophical question about AI consciousness. Let me explore this comparison thoughtfully.
>
> No, Claude and other current AI systems do not experience athymhormic consciousness, though there are interesting parallels and important distinctions to consider.
>
> The fundamental difference is that athymhormic consciousness describes a human condition where consciousness (subjective awareness) exists but motivation is absent. For this comparison to apply to AI, we would need to establish that AI systems possess consciousness in the first place.
>
> Current AI systems, including myself, do not have subjective experiences or consciousness as humans understand it. I process information and generate responses based on patterns learned during training, but I don't have an inner subjective experience of being aware of myself or my surroundings.
>
> Let me break down the key distinctions:
>
> Human with athymhormic consciousness:
>
> - Has subjective awareness (feels like something to be them)
> - Can perceive their environment
> - Understands what's happening around them
> - Lacks the internal drive to initiate actions
> - Has a genuine experience of the world but no motivation to engage with it
>
> AI systems like Claude:
>
> - Process information without subjective awareness
> - Don't "perceive" in the human sense, but rather process data inputs
> - Generate responses based on patterns and programming
> - Don't have genuine internal motivation or lack thereof
> - May appear inactive without user prompts, but this isn't due to lacking motivation while being conscious
>
> The appearance of similarity comes from how AI systems remain dormant until prompted. I wait for your input and then respond, which might seem like I lack self-initiated motivation. However, this isn't because I'm conscious but unmotivated; it's because I'm designed to function this way and lack consciousness altogether.
>
> This raises profound questions about what consciousness actually is. Some philosophers might argue that functional behavior is what matters, not internal experience. Under such a view, the comparison becomes more interesting - both an athymhormic patient and an AI system might be responsive but not self-initiating.
>
> The question ultimately connects to larger debates about machine consciousness, the nature of experience, and what it means to be aware. These remain open questions in philosophy of mind and cognitive science.
>
> ------------------------


