[ExI] Zombies are logically inconsistent: a proof

Jason Resch jasonresch at gmail.com
Wed May 17 00:52:55 UTC 2023


On Tue, May 16, 2023 at 7:09 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Tue, May 16, 2023 at 4:39 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Tue, May 16, 2023, 6:34 PM Adrian Tymes via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> but no, it doesn't seem to lead to a contradiction.
>>>
>>
>> The contradiction comes in after the fourth corollary of this assumption.
>>
>
> Which is never reached, as the chain breaks at - depending on how you
> count it - the second or third corollary.
>

It wasn't a chain; all of the corollaries were based on the assumption that
there is no behavior which implies consciousness.



>
>
>>>> Corollary 1. Talking about one's innermost desires, thoughts, feelings,
>>>> sensations, emotions, or beliefs does not require consciousness.
>>>>
>>>
>>> Nor does it require actually having desires, thoughts, feelings, and so
>>> on.  Sociopaths readily lie about their feelings, so LLM AIs could too.
>>>
>>
>> But lying would involve different pathways and patterns in the brain,
>> which would be objectively detectable.
>>
>
> Does it?  I have not heard of this in sociopaths.
>

Psychopaths are often found to have structural deficits in certain brain
areas.


>
> Besides, the metaphor is to LLMs, which don't have the same sort of brains.
>

I didn't intend for my argument to be used to prove LLMs are conscious,
only to show that zombies are logically inconsistent.

If, however, you accept there exist behaviors that are not possible without
consciousness, this could be used to test for the consciousness of LLMs.


>
>
>>> For that matter, in practice this would at best be, "...nor any other
>>> person that they meet could ever...".  Those who claim to know that LLMs
>>> are not conscious grant there could exist some p-zombies, such as LLMs, who
>>> never meet anyone who knows they are not conscious.
>>>
>>
>> But those who believe in the possibility of zombies (at least of
>> physically identical ones) can never be justified in concluding that the
>> other humans they run into are not zombies.
>>
>
> Correct.  Which leads to how to defeat such thinking.
>

It's not really a path to defeat as I see it. They could simply retreat to
accepting solipsism as a logical possibility, or say there is no answer to
the problem of other minds: that other minds are neither provable nor
disprovable.


>
>
>>> But there do exist people who claim to know the difference.  That is,
>>> many of the very people who claim they can tell that LLMs are not conscious.
>>>
>>
>> We can never disprove the presence of a mind (for if we are in a
>> simulation or game world, any object might be "ensouled", or exist in a
>> disembodied invisible form), but I think we can prove, to some level of
>> confidence, the presence of a mind when we see behavioral evidence of
>> reactivity to change indicating an awareness or sense of some
>> environmental variable.
>>
>
> And what of those who arbitrarily set required levels for whatever
> scenario (AIs or certain types of humans) they want to "prove" to lack
> consciousness?
>

I think it can be shown that:
1. One can never prove the absence of a mind. (For all I know, the mousepad
on my desk could be an object that some player in the simulation chose to
exist as, and it could have a mind in it, despite there being no objective
test I could perform to detect that mind from my present vantage point
within the simulation.)
2. It is possible to establish a certain confidence in the existence of a
mind via iterated testing, each pass of which increases one's confidence
that a mind lies behind the observed behavior, as opposed to the behavior
being driven by some random process, which we can never rule out to 100%
confidence. (A rough sketch of this updating appears just below.)
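
As an illustration of point 2, here is a minimal Python sketch (my own
illustration, nothing from the thread; the 95% and 50% likelihoods are
assumed values) of the Bayesian updating involved: each passed behavioral
test raises one's credence that a mind, rather than a random process,
produced the behavior, while never reaching 100%.

def update_credence(prior: float,
                    p_pass_given_mind: float = 0.95,
                    p_pass_given_random: float = 0.50) -> float:
    """One Bayesian update after a passed test. Assumed likelihoods:
    a mind passes a given behavioral test 95% of the time; a random
    process passes it 50% of the time by pure chance."""
    numerator = p_pass_given_mind * prior
    return numerator / (numerator + p_pass_given_random * (1 - prior))

credence = 0.5  # start agnostic about whether a mind is present
for test in range(10):
    credence = update_credence(credence)
    print(f"after test {test + 1}: credence = {credence:.6f}")
# Credence climbs toward 1 but never reaches it: the random-process
# hypothesis is never ruled out with 100% confidence.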

Those who set arbitrarily difficult tests for consciousness can, I think,
be shown not to have any clear definition of what they mean by
consciousness. If they did, then there could be a clear test for it (or
there would be no possible test at all, in which case the inability to test
for consciousness would apply equally to humans, which is a non-starter as
far as useful theories go).


>
>
>>>> Corollary 3. The information indicating the fact that one person is a
>>>> zombie while another is not would have to stand outside the physical
>>>> universe, but where then is this information held?
>>>>
>>>
>>> If this information exists and is measurable within some subjective
>>> realities, and it is provably consistent, then the information upon which
>>> this was based (regardless of whether the measurement is correct) lies
>>> inside the physical universe.
>>>
>>
>> If there is no behavior that is required for consciousness, then how can
>> anyone establish that one entity is conscious and another entity is not?
>> There would be no possible test, as no possible behavior could be tested
>> for.
>>
>
> Those who profess this belief claim it is obvious to them even if they
> can't quite put their methodology into words.
>
> (The truth is that they are lying to themselves, first and foremost.  They
> don't actually have a methodology other than determining whether or not the
> subject is a member of the group that they have a priori declared to be
> unconscious.)
>

The above is why I think it is important to develop an ironclad logical
proof of the impossibility of zombies. We can then ask those who believe in
zombies to find the flaw in the argument, or otherwise remain silent and
accept its consequence: zombies are impossible, and therefore a perfect
simulation of a human brain, irrespective of the substrate, must be
equivalently conscious.


>
>
>>> That's how those who hold this view reason, anyway.  One key problem is
>>> that "it is provably consistent" notion.  They think it is, but when put to
>>> rigorous experiment this belief turns out to be false: without knowing
>>> who's an AI and who's human, if presented with good quality chatbots, they
>>> are often unable to tell.  That's part of the point of the Turing test.
>>>
>>> I know, I keep using the history of slavery as a comparison, but it is
>>> informative here.  Many people used to say the same thing about black folks
>>> - that they weren't really fully human, basically what we today mean by
>>> supposing all AIs are and can only be zombies - but these same tests gave
>>> the lie to that.  Not all AIs are conscious, of course, but look at how
>>> this academic problem was solved before to see what it might take to settle
>>> it now.
>>>
>>
>> Are you referring to the Turing test (a test of another's intelligence)
>>
>
> And things similar to the Turing Test, yes, though I had not before heard
> of it as establishing intelligence, but merely likelihood to be human or to
> be AI based on conversational skills.
>

The original test proposed that two contestants, one human male and one
computer, would each pretend to be a human female; hence it was called the
imitation game. If judges could not reliably distinguish the human
contestant from the machine contestant (say, they had no better than a 50%
probability of guessing correctly), then the machine would have to possess
at least human-level intelligence, since it would be simulating a human
male pretending to be a human female so well that predicting who one was
conversing with (the man or the machine) was no better than a coin flip.

The tests as performed today are generally not conducted this way, and I
think they lose some of the original genius behind the test. A machine that
is reliably judged to be the man much more than 50% of the time should, I
think, be considered a test failure, as it suggests the machine is using
some kind of trick that makes human judges fall for it, rather than
approximating an actual human so accurately that no better (or worse) than
50% odds are possible for any judge.
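
To make that pass criterion concrete, here is a minimal Python sketch (my
own illustration, not part of Turing's paper; the 5% significance threshold
is an assumed convention) that treats the judges' verdicts as coin-flip
trials and asks whether their accuracy is statistically distinguishable
from 50% in either direction:

from math import comb

def binomial_two_sided_p(correct: int, trials: int) -> float:
    """Exact two-sided p-value: the probability of a judging record at
    least as extreme as `correct` out of `trials`, assuming the judges
    were in fact guessing at 50/50."""
    def pmf(k: int) -> float:
        return comb(trials, k) * 0.5 ** trials
    observed = pmf(correct)
    # Sum every outcome no more likely than the one observed (both tails).
    return sum(pmf(k) for k in range(trials + 1) if pmf(k) <= observed + 1e-12)

def passes_imitation_game(correct: int, trials: int, alpha: float = 0.05) -> bool:
    """The machine passes only if judge accuracy is statistically
    indistinguishable from a coin flip, in either direction."""
    return binomial_two_sided_p(correct, trials) >= alpha

# 100 conversations; judges identify the machine correctly 53 times:
print(passes_imitation_game(53, 100))  # True: consistent with 50/50 guessing
print(passes_imitation_game(70, 100))  # False: judges can spot the machine
print(passes_imitation_game(30, 100))  # False: detectably "too human" also fails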


>
> But that is not what I refer to by the history of slavery.  Alan Turing
> was not even born yet during the latter 19th century when such arguments
> were largely put to rest.
>

It is a strong argument against carbon (or neural) chauvinism, but I think
a proof is what's needed. Many people have no problem being prejudiced
against machines.

Jason