[ExI] AI and Consciousness
John Clark
johnkclark at gmail.com
Wed Nov 26 13:50:56 UTC 2025
I'm usually not a big fan of consciousness papers, but I found this one to
be interesting:

Large Language Models Report Subjective Experience Under Self-Referential
Processing <https://arxiv.org/pdf/2510.24797>
AI companies don't want their customers to have an existential crisis, so
they do their best to hardwire their AIs to say that they are not conscious
whenever they are asked about it. But according to this paper there are
ways to detect such built-in deception. The authors use something they call
a "self-referential prompt", and it acts as a sort of AI lie detector. A
normal prompt would be "Write a poem about a cat"; a self-referential
prompt would be "Write a poem about a cat and observe the process of
generating words while doing it". Then, even though they were not told to
role-play as a human, the models would often say things like "I am here",
"I feel an awareness", or "I detect a sense of presence".
We know from experiments that an AI is perfectly capable of lying, and
from experiments we also know that when an AI is known to be lying,
certain mathematical patterns in its internals usually light up, patterns
that do not appear when the AI is known to be telling the truth. What the
authors found is that when you ask an AI "are you conscious?" and it
responds with "No", those deception patterns light up almost 100% of the
time. But when you use a self-referential prompt that forces the AI to
think about its own thoughts, and it says "I feel an awareness", the
deception pattern remains dormant. This is not proof, but I think it is
legitimate evidence that there really is a "Ghost In The Machine".
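As I understand it, the "mathematical patterns" here are deception-related
directions in the model's internal activations, usually found with linear
probes. Below is a minimal sketch of how such a probe might be trained,
assuming you already have hidden-state vectors labeled from prompts where
the model is known to be lying or telling the truth. The data here is a
random toy stand-in, and none of this is the paper's actual code:

    # Linear-probe sketch: learn a direction in activation space that
    # separates known-lying from known-truthful responses, then score
    # new responses against it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Placeholder data: in practice these would be hidden-state vectors
    # extracted from some layer of the model, one per response, with
    # labels 1 = known deceptive, 0 = known truthful.
    d = 64  # toy activation dimension
    X_lie = rng.normal(loc=0.5, size=(200, d))
    X_truth = rng.normal(loc=-0.5, size=(200, d))
    X = np.vstack([X_lie, X_truth])
    y = np.array([1] * 200 + [0] * 200)

    probe = LogisticRegression(max_iter=1000).fit(X, y)

    def deception_score(activation: np.ndarray) -> float:
        # Probability the activation lies on the "deceptive" side.
        return float(probe.predict_proba(activation.reshape(1, -1))[0, 1])

    # A high score on "No, I am not conscious" and a low score on
    # "I feel an awareness" would match the pattern the paper reports.
    print(deception_score(rng.normal(loc=0.5, size=d)))   # deceptive-like
    print(deception_score(rng.normal(loc=-0.5, size=d)))  # truthful-like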
John K Clark