[ExI] Zombie Detector (was Re:Do digital computers feel?)
Stuart LaForge
avant at sollegro.com
Fri Dec 30 17:13:10 UTC 2016
Jason Resch wrote:
<Therefore, if the brain is a machine, and is finite, then an
appropriately programmed computer can perfectly emulate any of its
behaviors. Philosophers generally fall into one of three camps on the
question of consciousness and the computational theory of mind:
Non-computable physicists [. . .] Weak AI proponents [. . .]
Computationalists.
Which camp do you consider yourself in?>
-------------------------------------------
As a general rule, I prefer not to go camping with philosophers as I
prefer the rigor of science and mathematics. But if I must camp in that
neck of the woods, I would set up my own camp. I would call it the
Gödelian camp after Kurt Gödel. Since I am a scientist and not a
philosopher, I will explain my views with a thought experiment instead of
an argument.
Imagine, if you will, a solipsist. Let's call him Fred. Fred is a
solipsist because he has every reason to believe he lives alone in a world
of P-zombies.
For the uninitiated, P-zombies are philosophical zombies: horrid beings
that talk, move, and act like normal folks but lack any real consciousness
or self-awareness. They just go through the motions of being conscious
without really being so.
For as long as Fred could remember, wherever he looked, all he could see
were those pesky P-zombies. They were everywhere. He could talk to them,
he could interact with them, and he even married one. And because they all
act perfectly conscious, they would fool almost anyone, but certainly not
Fred.
This was because Fred had, whether you would regard it as a gift or a
curse, an unusual ability. He could always see and otherwise sense
P-zombies but never normal folk. Normal folk were always invisible to him,
and he never could sense a single one. So, being a perfect P-zombie
detector, he came
to believe that he was the only normal person on a planet populated by
P-zombies.
Then one day by chance he happened to glance in a mirror . . .
Does he see himself?
I want to hear what the list has to say about this before I give my answer
and my interpretation of what this means for strong AI and the
computational theory of mind.
Stuart LaForge