[ExI] The digital nature of brains (was: digital simulations)

Stathis Papaioannou stathisp at gmail.com
Mon Feb 1 10:04:03 UTC 2010


On 1 February 2010 06:22, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Sun, 1/31/10, Eric Messick <eric at m056832107.syzygy.com> wrote:

>> This was the start of a series of posts where you said that
>> someone with a brain that had been partially replaced with
>> programmatic neurons would behave as though he was at least partially
>> not conscious.  You claimed that the surgeon would have to
>> replace more and more of the brain until he behaved as though he was
>> conscious, but had been zombified by extensive replacement.
>
> Right, and Stathis' subject will eventually pass the TT just as your subject will in your thought experiment. But in both cases the TT will give false positives. The subjects will have no real first-person conscious intentional states.

I think you have tried very hard to avoid discussing this rather
simple thought experiment. It has one premise, call it P:

P: It is possible to make artificial neurons which behave like normal
neurons in every way, but lack consciousness.

That's it! Now, when I ask whether P is true, you have to answer "Yes"
or "No". Is P true?

OK, assuming P is true, what happens (a) to a person's behaviour and
(b) to his experiences if the neurons in a part of his brain with an
important role in consciousness are replaced with these artificial
neurons?

I'll answer (a) for you: his behaviour must remain unchanged. It must
remain unchanged because the artificial neurons behave in a perfectly
normal way in their interactions with normal neurons, sensory organs
and effector organs, according to P. If they don't, then P is false,
and you said that P is true. Can you see a way I have missed in which
it would *not* be a contradiction to claim that the person's neurons
behave normally but the person behaves differently?

OK, the person's behaviour remains unchanged, by definition, if P is
true. What about (b), his experiences? The classic example here is
visual perception. Suppose the neurons replaced are the ones
responsible for visual experience. If P is true, the person goes
blind, since the artificial neurons lack consciousness; but if P is
true, he is also forced to behave as if he has normal vision. So
internally, either he does not notice that he is blind, or he notices
that he is blind but is unable to communicate it. The latter is
impossible for the same reason that a change in his behaviour is
impossible: the neurons in his brain which do the thinking are also
constrained to behave normally. That leaves the first option: he goes
blind but doesn't notice. If this idea is coherent to you, then you
have to admit that you might right now be blind and not know it.
However, you have clearly stated that you think this is preposterous:
a zombie doesn't know it's a zombie, but you know you're not a zombie,
and you would certainly know if you suddenly went blind (as a matter
of fact, some people *don't* recognise when they go blind - it's
called Anton's syndrome - but these people also behave abnormally, so
they aren't zombies or partial zombies).

Where does that leave you? I think you have to say you were mistaken
in saying P is true. It isn't possible to make artificial neurons
which behave like normal neurons in every way but lack consciousness.
Can you see another way out that I haven't seen?


-- 
Stathis Papaioannou


