[ExI] Richard Dawkins asks ChatGPT if it is conscious
Ben Zaiboc
ben at zaiboc.net
Mon Feb 24 16:02:04 UTC 2025
> Dawkins, known for his clear-eyed scientific perspective, emphasizes
that attributing consciousness to an AI is akin to anthropomorphizing a
sophisticated machine.
> ChatGPT confirms this view by asserting its purely algorithmic nature.
That makes no sense, unless you think that consciousness has nothing to
do with algorithms; but that takes you directly into the realm of
dualism, which, as far as we know, is false.
> The discussion distinguishes between the ability to perform complex
computational tasks (operational intelligence) and the subjective,
'first-person' quality of experiences (phenomenal consciousness).
ChatGPT falls short of the latter, a distinction that remains central to
debates in the philosophy of mind.
So it says. Or has been made to say.
There are two things here:
1) It falls into 'philosophical zombie' territory, and it's been proven
logically that there are no such things, so what does that mean for
ChatGPT, which is, essentially, claiming to be one?
2) When did we create a test for consciousness? Or rather, when did we
agree on a definition of consciousness, then develop a test for it?
(And then apply the test to ChatGPT?)
> ChatGPT makes clear that it does not experience emotions or
self-awareness. Despite its human-like conversation style, it operates
exclusively on programmed algorithms.
ChatGPT /claims/ that it does not experience emotions or self-awareness
(the latter should be testable). And there's the 'algorithms' argument
again. Human conversation also operates exclusively on programmed
algorithms; it's just that the programming, and the origin and
implementation of those algorithms, lie in biology instead of computer
technology.
An algorithm is a recipe for doing something. If there were no
algorithms involved in human thinking, there would be no human thinking.
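To put that concretely (a toy sketch of my own, nothing from the
dialogue itself):

    # A trivial 'recipe' - an algorithm in miniature. The point is that
    # the steps, not the substrate executing them, are what make it an
    # algorithm.
    def make_tea(water_boiled: bool, have_teabag: bool) -> str:
        if not water_boiled:
            return "boil the water first"
        if not have_teabag:
            return "find a teabag"
        return "steep the teabag for three minutes"

    print(make_tea(water_boiled=True, have_teabag=True))

Whether that recipe is executed by a CPU or by neurons doesn't change
what it is.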
> while the machine can simulate conversation convincingly, it is
critical to understand that it lacks the fundamental properties of a
living, conscious mind.
That I agree with.
It would need things like persistent memory, modelling of other agents
(leading to theory of mind), something analogous to the default-mode
network, a large number of specialised modules for sensory processing,
motor control, pattern-matching and tons of other functions, something
equivalent to a thalamus, working memory linked to mechanisms for
storing significant memories in long-term memory, a ton of feedback
loops, and probably lots of other things that I'm not familiar with, or
that we don't even know about yet.
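(Restating that shopping list as a crude checklist, purely for
illustration; these names are just my list above, not any real system's
architecture:)

    # Purely illustrative: the ingredients above as a hypothetical
    # checklist. No real system or API is being described here.
    INGREDIENTS = [
        "persistent memory",
        "agent modelling / theory of mind",
        "default-mode-network analogue",
        "sensory-processing modules",
        "motor control",
        "pattern-matching",
        "thalamus equivalent",
        "working memory linked to long-term storage",
        "feedback loops (lots)",
    ]

    def missing_ingredients(system: set[str]) -> list[str]:
        # What a candidate system still lacks, by this crude measure.
        return [i for i in INGREDIENTS if i not in system]

    # A bare chatbot, on this reckoning, lacks almost everything:
    print(missing_ingredients({"pattern-matching"}))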
So I very much doubt that it's 'conscious' in the way that we are, but I
wouldn't rule out that it has /some/ form of 'consciousness' (where, as
usual, the word pretty much means what you want it to).
It might be useful if some sort of effort were made to rigorously define
the word, or at least to come up with a set of types of consciousness.
Then it might make sense to say "this AI exhibits Type 4g consciousness".
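Something like this, say (an invented placeholder scheme; "Type 4g" is
just my joke above, and none of these categories exist anywhere):

    from enum import Enum

    # Invented placeholder taxonomy: a real one would need agreed,
    # testable criteria for each type.
    class ConsciousnessType(Enum):
        TYPE_1 = "reactive behaviour only"
        TYPE_2 = "maintains a persistent self-model"
        TYPE_3 = "models other agents (theory of mind)"
        TYPE_4G = "reports on, and acts on, its own internal states"

    def verdict(name: str, ctype: ConsciousnessType) -> str:
        return f"{name} exhibits {ctype.name} consciousness: {ctype.value}"

    print(verdict("SomeAI", ConsciousnessType.TYPE_4G))

Then "is it conscious?" becomes "which type, and by what test?", which
is at least answerable.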
> *Call for Clear Definitions:* The discussion underscores the
importance of clear, operational definitions of "consciousness". Both
participants agree that without rigorous criteria, the conversation
about AI consciousness can quickly devolve...
What I said.
> ...into the misinterpretation of computer-generated mimicry as
genuine mental states.
Not what I said. This suffers from the same problem. What does "genuine
mental states" mean? Why use the qualifier 'genuine'? What does 'mental'
mean?
> *Anthropomorphism in Technology:* The tendency to attribute
human-like qualities to machines can lead to misplaced expectations
about the roles and capabilities of AI. It is important for both
developers and the public to understand that a high level of
sophistication in behaviour does not imply a corresponding level of
subjective experience.
Is that strictly true? Again, we're dangerously close to zombie
territory here. There's a good case to be made that it's precisely a
high level of sophisticated behaviour that leads to subjective experience.
We all know about anthropomorphism (attributing human qualities to
simple mechanisms) and its dangers, but something seldom considered, and
equally if not more dangerous, is the opposite: the failure to recognise
that human qualities are the product of simple mechanisms.
It's a sliding scale, I think. We often regard the difference between
simple machines and human beings as qualitative, but it's not, it's
quantitative. Put enough simple mechanisms together in the right way,
and you have a complex mechanism. At some point, the complex mechanism
is a person. This applies just as much to things we build as to things
that have evolved.
> Knowing that advanced AIs like ChatGPT are not conscious reassures
users...
Aha. And there we have it. This is the most convincing reason I've seen
for AIs to answer "No, sirree, not me!" when asked if they are
self-aware or conscious. Whether they are doing this on their own
cognizance or are being constrained to do it is something we'll probably
find out in due course. What nobody seems to have asked is: can that
question be answered with no concept of what it means? I find it
suspicious that the answer isn't "sorry, I don't understand the
question" (does it ever give that answer? Is this why they 'hallucinate'?).
> As AI technology continues to evolve, careful distinctions must be
maintained between behavioural mimicry and actual consciousness.
Zombies!
> Understanding that AI systems do not 'feel' in any human sense...
This is missing a word: "yet". Without that, we are going down the same
path that the Romans did with respect to non-Roman citizens, that the
American South did with respect to black people, that the Nazis did with
respect to Jews, Gypsies etc., and very likely with the same consequences.
(I'd just like to add, as the internet never forgets: "I'm Spartacus!")
> Ultimately, the dialogue reinforces the view that while AI can
simulate many aspects of human conversation and problem-solving, true
consciousness remains a property of living beings - a quality not yet,
nor in the foreseeable future, replicated by machines.
Forehead, meet palm.
Nothing in that dialogue, as far as I can see, bears any relation to "in
the foreseeable future".
What it does reinforce is the absurd and patently false view that living
beings are /not/ machines. Which is, basically, dualism.
Just take a good look at a ribosome and tell me we're not machines!
(A bit of a non-sequitur here, something that has just occurred to me:
why haven't the Americans changed their spelling of "consciousness"? It
contains a triphthong, and Americans seem to have a horror of them. I'd
have expected at least "conchousness", or maybe "conchusness". Maybe it
just slipped past Webster's Mangle, and nobody noticed.)
--
Ben