[ExI] Zombies are logically inconsistent: a proof

Adrian Tymes atymes at gmail.com
Tue May 16 22:32:34 UTC 2023


On Tue, May 16, 2023 at 2:35 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I think the "no B exists" assumption: "No specific behavior nor any
> aggregate set of behaviors implies the presence of a conscious mind." also
> leads to contradiction.
>

For the record, I'm just devil's advocating here - but no, it doesn't seem
to lead to a contradiction.


> Corollary 1. Talking about one's innermost desires, thoughts, feelings,
> sensations, emotions, beliefs, does not require consciousness.
>

Nor does it require actually having desires, thoughts, feelings, and so
on.  Sociopaths readily lie about their feelings, so LLM AIs could too.


> Corollary 2. One could claim to be conscious and be wrong for reasons that
> neither they, nor any other person could ever prove or even know. That is,
> there would be truths that stand outside of both objective and subjective
> reality.
>

Subjective, perhaps, but not objective.  All that any person can measure is
their subjective reality.

For that matter, in practice this would at best be, "...nor any other
person that they meet could ever...".  Those who claim to know that LLMs
are not conscious must grant that there could exist p-zombies, such as
LLMs, that never meet anyone able to tell they are not conscious.

But there do exist people who claim to know the difference: namely, many
of the very people who claim they can tell that LLMs are not conscious.


> Corollary 3. The information indicating the fact that one person is a
> zombie while another is not would have to stand outside the physical
> universe, but where then is this information held?
>

If this information exists and is measurable within some subjective
realities, and it is provably consistent, then the information upon which
this was based (regardless of whether the measurement is correct) lies
inside the physical universe.

That's how those who hold this view reason, anyway.  One key problem is
the "it is provably consistent" notion.  They think it is, but when put to
rigorous experiment this belief turns out to be false: without knowing
who's an AI and who's human, when presented with good-quality chatbots,
they are often unable to tell.  That's part of the point of the Turing
test.

I know, I keep using the history of slavery as a comparison, but it is
informative here.  Many people used to say the same thing about black folks
- that they weren't really fully human, which is basically what we today
mean by supposing all AIs are and can only be zombies - but these same
tests gave the lie to that.  Not all AIs are conscious, of course, but look
at how this academic problem was settled before to see what it might take
to settle it now.
