[ExI] Zombies are logically inconsistent: a proof

Adrian Tymes atymes at gmail.com
Wed May 17 00:08:02 UTC 2023


On Tue, May 16, 2023 at 4:39 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Tue, May 16, 2023, 6:34 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> but no, it doesn't seem to lead to a contradiction.
>>
>
> The contradiction comes in after the fourth corollary of this assumption
>

Which is never reached, as the chain breaks at the second or third
corollary, depending on how you count.


> Corollary 1. Talking about one's innermost desires, thoughts, feelings,
>>> sensations, emotions, and beliefs does not require consciousness.
>>>
>>
>> Nor does it require actually having desires, thoughts, feelings, and so
>> on.  Sociopaths readily lie about their feelings, so LLM AIs could too.
>>
>
> But lying would involve different pathways and patterns in the brain,
> which would be objectively detectable.
>

Would it?  I have not heard of this being demonstrated in sociopaths.

Besides, the analogy is to LLMs, which don't have the same sort of brains.


> For that matter, in practice this would at best be, "...nor any other
>> person that they meet could ever...".  Those who claim to know that LLMs
>> are not conscious grant that there could exist some p-zombies, such as
>> LLMs, who never meet anyone who knows they are not conscious.
>>
>
> But those who believe in the possibility of zombies (at least of
> physically identical ones) can never have a justification to conclude that
> other humans they run into are not zombies.
>

Correct.  Which leads to the question of how to defeat such thinking.


> But there do exist people who claim to know the difference.  That is, many
>> of the very people who claim they can tell that LLMs are not conscious.
>>
>
> We can never disprove the presence of a mind (for if we are in a
> simulation or game world, any object might be "ensouled", or exist in a
> disembodied invisible form), but I think we can prove, to some level of
> confidence, the presence of a mind when we see behavioral evidence of
> reactivity to change indicating an awareness or sense of some environmental
> variable.
>
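
Sketched as hypothetical code (the agent interface and every name below are
inventions, purely for illustration), such a reactivity test might look
like:

from typing import Callable, Iterable

def shows_reactivity(agent: Callable[[float], str],
                     env_values: Iterable[float]) -> bool:
    """Crude behavioral test: does the agent's response co-vary with an
    environmental variable at all?"""
    responses = [agent(v) for v in env_values]
    return len(set(responses)) > 1

# A thermostat-like agent passes; a constant responder does not.
print(shows_reactivity(lambda t: "on" if t < 18 else "off", [10.0, 25.0]))  # True
print(shows_reactivity(lambda t: "on", [10.0, 25.0]))                       # False

At best this evidences reactivity to an environmental variable, not
consciousness, and the cutoff (here, any variation at all) is a free choice.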

And what of those who arbitrarily set the required level of evidence for
whatever group (AIs, or certain types of humans) they want to "prove" lacks
consciousness?


> Corollary 3. The information indicating that one person is a zombie
>>> while another is not would have to stand outside the physical universe,
>>> but where then is this information held?
>>>
>>
>> If this information exists and is measurable within some subjective
>> realities, and it is provably consistent, then the information upon which
>> that measurement was based (regardless of whether the measurement is
>> correct) lies inside the physical universe.
>>
>
> If there is no behavior that is required for consciousness, then how can
> anyone establish that one entity is conscious and another entity is not?
> There would be no possible test, as no possible behavior could be tested
> for.
>

Those who profess this belief claim it is obvious to them even if they
can't quite put their methodology into words.

(The truth is that they are lying to themselves, first and foremost.  They
don't actually have a methodology other than determining whether or not the
subject is a member of the group that they have a priori declared to be
unconscious.)


> That's how those who hold this view reason, anyway.  One key problem is
>> the "it is provably consistent" notion.  They think it is, but when put to
>> rigorous experiment this belief turns out to be false: without knowing
>> who's an AI and who's human, if presented with good-quality chatbots, they
>> are often unable to tell.  That's part of the point of the Turing test.
>>
>> I know, I keep using the history of slavery as a comparison, but it is
>> informative here.  Many people used to say the same thing about black folks
>> - that they weren't really fully human, essentially what we mean today when
>> we suppose that all AIs are and can only be zombies - but these same tests
>> gave the lie to that.  Not all AIs are conscious, of course, but look at
>> how this academic problem was solved before to see what it might take to
>> settle it now.
>>
>
> Are you referring to the Turing test (a test of another's intelligence)?
>

The Turing test and things similar to it, yes, though I had not heard of it
before as establishing intelligence, merely the likelihood of being human or
AI based on conversational skill.
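
As an aside, that likelihood claim is testable in the obvious way.  Here is
a toy Python sketch (all numbers hypothetical) for scoring a blinded
Turing-style discrimination experiment, in which judges label transcripts
without knowing which came from a human:

from math import comb

correct, trials = 54, 100  # hypothetical: 54 of 100 blinded judgments right

accuracy = correct / trials
# One-sided binomial p-value: the chance of scoring at least `correct`
# by flipping a coin on every judgment.
p_value = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"accuracy = {accuracy:.2f}, p-value vs. chance = {p_value:.3f}")
if p_value < 0.05:
    print("Judges discriminate better than chance.")
else:
    print("No evidence the judges can tell who is the AI.")

Results like these are what give the lie to "I can just tell."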

But that is not what I refer to by the history of slavery.  Alan Turing had
not even been born in the late 19th century, when such arguments were
largely put to rest.