[ExI] Zombie Detector (was Re:Do digital computers feel?)

Brent Allsop brent.allsop at gmail.com
Fri Dec 30 22:15:44 UTC 2016


I, like most people, am a mere trichromat: I experience the world with
3 primary colors.  But some people are tetrachromats, and do it with 4
primary colors.  Let’s call this 4th color “grue”.  Obviously, all of us
trichromats can hear such a person say things like: “No, that is grue, not
one of the primary colors, as you claim,” and we can observe what is causing
the 4th primary color, including its neural correlate in their brains.  In
other words, like Frank Jackson’s brilliant color scientist raised in a
black and white room, we trichromats can learn everything about grue and see
that it is not in our heads, but we can see when the neural correlate of
grue is in the head of a tetrachromat.



In other words, all of us normal trichromatic people are grue zombies.  We
can know and communicate everything about grue.  In fact, we might even be
trained to call the right things grue, just as the tetrachromat does, and
lie about it, convincing everyone else that we might be tetrachromats
(until someone observes our brains).  So, until we enhance our primary
visual cortex and give it whatever produces the grue color, we will never
know how the tetrachromat qualitatively interprets the word “grue”.



Now, some people think of a “p-zombie” as something that is atomically
identical to us but just doesn’t have the qualitative experience of
consciousness, which of course is absurd, and very different from the grue
type of zombie I am: one that simply isn’t yet capable of producing the
grue neural correlate in my brain.  But I can represent grue with anything
else that is in my brain, and talk about it as if it were grue, in a
grue-zombie way.



On Fri, Dec 30, 2016 at 12:30 PM, Jason Resch <jasonresch at gmail.com> wrote:

> Reminds me a bit of "An Unfortunate Dualist":
>
> http://themindi.blogspot.com/2007/02/chapter-23-unfortunate-dualist.html
>
> As to your puzzle, if Fred is unable to detect any effects from conscious
> people (including their reflections), then he should not be able to see
> his own reflection, but then he also shouldn't be able to hear his own
> thoughts either. Which might be your definition of a zombie, making him
> visible, etc. "Russell's reflection". However, Fred's own voice might still
> be heard if Fred's consciousness is an epiphenomenon, but practically
> speaking I think epiphenomenalism can be ruled out, together with the
> notion of p-zombies.
>
> See Daniel Dennett's "The Unimagined Preposterousness of Zombies":
> https://ase.tufts.edu/cogstud/dennett/papers/unzombie.htm
>
> Dennett argues that "when philosophers claim that zombies are conceivable,
> they invariably underestimate the task of conception (or imagination), and
> end up imagining something that violates their own definition". He coined
> the term "zimboes" (p-zombies that have second-order beliefs) to argue
> that the idea of a p-zombie is incoherent: "Zimboes thinkZ they are
> conscious, thinkZ they have qualia, thinkZ they suffer pains – they are
> just 'wrong' (according to this lamentable tradition), in ways that
> neither they nor we could ever discover!"
>
>
>
> I'm not sure, however, whether your thought experiment sheds any new light
> on the concepts of consciousness or zombies. It seems like it may be only a
> reformulation of the "Barber Paradox", where the self-reflexivity is a
> "power to detect only non-conscious things", aimed at one's own
> consciousness.
>
> Jason
>
> On Fri, Dec 30, 2016 at 11:13 AM, Stuart LaForge <avant at sollegro.com>
> wrote:
>
>> Jason Resch wrote:
>> <Therefore, if the brain is a machine, and is finite, then an
>> appropriately programmed computer can perfectly emulate any of its
>> behaviors. Philosophers generally fall into one of three camps on the
>> question of consciousness and the computational theory of mind:
>> Non-computable physicists [. . .] Weak AI proponents [. . .]
>> Computationalists.
>>
>> Which camp do you consider yourself in?>
>> -------------------------------------------
>>
>> As a general rule, I prefer not to go camping with philosophers as I
>> prefer the rigor of science and mathematics. But if I must camp in that
>> neck of the woods, I would set up my own camp. I would call it the
>> Gödelian camp after Kurt Gödel. Since I am a scientist and not a
>> philosopher, I will explain my views with a thought experiment instead of
>> an argument.
>>
>> Imagine, if you will, a solipsist. Let's call him Fred. Fred is a
>> solipsist because he has every reason to believe he lives alone in a
>> world of P-zombies.
>>
>> For the uninitiated, P-zombies are philosophical zombies. Horrid beings
>> that talk, move, and act like normal folks but lack any real consciousness
>> or self-awareness. They just go through the motions of being conscious but
>> are not really so.
>>
>> So ever since Fred could remember, wherever he looked, all he could see
>> were those pesky P-zombies. They were everywhere. He could talk to them,
>> he could interact with them, and he even married one. And because they all
>> act perfectly conscious, they would fool most anyone but certainly not
>> Fred.
>>
>> This was because Fred had, whether you would regard it as a gift or curse,
>> an unusual ability. He could always see and otherwise sense P-zombies but
>> never normal folk. Normal folk were always invisible to him and he never
>> could sense a single one. So he, being a perfect P-zombie detector, came
>> to believe that he was the only normal person on a planet populated by
>> P-zombies.
>>
>> Then one day by chance he happened to glance in a mirror . . .
>>
>> Does he see himself?
>>
>> I want to hear what the list has to say about this before I give my answer
>> and my interpretation of what this means for strong AI and the
>> computational theory of mind.
>>
>> Stuart LaForge
>>
>>
>>

