[ExI] Do digital computers feel?

Stuart LaForge avant at sollegro.com
Mon Jan 2 03:59:59 UTC 2017


I don't know. I really don't know. I weep quietly over my ignorance. I
am devastated and discomfited by the abyss of my incomprehension.

Rafal writes:
<But riddle me that - if you run a good digital simulation of you being
hit with a baseball bat a hundred times, exactly identically, does it feel
pain a hundred times? Once? Never? Remember, the runs are mathematically
indiscernible.>

The pain it feels is mathematically undecidable. Not because consciousness
itself is non-computable, but because the ability to discern or prove
consciousness is. I posed a thought experiment in the Zombie Detector
thread that conjectures this, and I have almost figured out how to
rigorously prove it.

Rafal wrote:
<It very well may be that identity of indiscernibles does not matter here.
A mathematical object, a triangle, is not itself changed by rotation in
some system of coordinates, but its relationships with that system are
changed, so the rotated versions are no longer indiscernible within the
system.>

That could be an alternate mathematical approach to the result of my
proof, but it is not the way I am doing it. Instead, as Jason Resch
pointed out in the Zombie Detector thread, I am using Russell's Paradox.
He did not, however, see the implications thereof.

Rendered in its simplest form, my argument postulates that conscious
beings are beings that are self-aware. They are therefore mathematically
equivalent to sets that contain themselves. Zombies are the complement of
the set of all conscious beings, in other words the set of unconscious
beings. They are the set of "sets that do not contain themselves" from
Russell's Paradox.
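In set-builder notation, the mapping is roughly this (C and Z are my
labels here, not standard notation):

```latex
% Conscious beings as self-containing sets; zombies as their complement.
C = \{\, x \mid x \in x \,\}, \qquad Z = \{\, x \mid x \notin x \,\\}.
% Russell's question, applied to Z itself:
Z \in Z \iff Z \notin Z.
```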

Now let's say that you have an ideal generalized Turing test. It is
generalized because its intent is not to figure out whether the subject
is human, but whether the subject is conscious. Whether it does so by
some infallible function, algorithm, or set of questions to ask the
subject does not matter for my proof.

What matters is that the test is able to infallibly detect zombies. The
immediate logical implication is that if the Turing test is applied to
the tester himself, then it is strictly undecidable whether the tester is
conscious. That is to say, the being that applies the Turing test to
himself will pass the Turing test if, and only if, he fails it.
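The self-application step can be sketched as a diagonal argument, in the
style of the halting-problem proof. Everything here is hypothetical:
`is_conscious` stands in for the supposed infallible detector, and the
only point of the sketch is that assuming such a detector and then
self-applying it produces a contradiction, which in running code shows up
as non-terminating self-reference:

```python
# Toy diagonalization sketch. "is_conscious" is a stand-in for the
# hypothetical infallible consciousness detector; no such function is
# claimed to exist. The tester is built to be conscious exactly when
# the detector says it is not -- the "passes iff fails" biconditional.

def make_tester(is_conscious):
    """Build a tester whose self-report negates the detector's verdict."""
    class Tester:
        def conscious(self):
            # Self-application: run the detector on oneself, do the opposite.
            return not is_conscious(self)
    return Tester()

def naive_detector(subject):
    # Placeholder "infallible" detector: it simply asks the subject.
    return subject.conscious()

tester = make_tester(naive_detector)

# Evaluating naive_detector(tester) never settles on True or False:
# each call re-invokes the other, and Python aborts with RecursionError.
```

In the logic the contradiction is the biconditional "passes iff fails";
in Python it surfaces as unbounded mutual recursion, which is the
operational face of the same undecidability.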

This is the very essence of all Russell paradoxes, of which there are
infinitely many; the Barber Paradox is another instance of the same
paradox. It is strictly undecidable whether the set of all sets that do
not contain themselves contains itself, because it contains itself if,
and only if, it does not contain itself: a contradiction, and therefore
an impossibility.

Therefore a generalized infallible Turing test is a logical impossibility.
Thus none of the Computationalists, Weak AI proponents, or
Non-computationalists have anything to base their arguments on except
blind faith. Really, all that matters is a being's ability to convince
others of its consciousness, regardless of its true state of
consciousness, which is mathematically undecidable from the outside even
if it is axiomatic from within.

If Rafal's simulacrum, caught in a pain loop, can convince somebody it is
feeling pain 100 times, then that is literally ALL that matters, because
that is all that CAN matter.

Stuart LaForge




