[ExI] Meaningless Symbols.

Stathis Papaioannou stathisp at gmail.com
Sun Jan 17 01:14:00 UTC 2010

2010/1/17 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Fri, 1/15/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>> The drugs can do this only by affecting the behaviour of
>> neurons. What you claim is that it is possible to make a physical
>> change to a neuron which leaves its behaviour unchanged but changes or
>> eliminates the person's consciousness.
> You keep assigning absolute atomic status to neurons and their behaviors, forgetting that just as the brain is made of neurons, neurons are made of objects too. Those intra-neuronal objects have as much right to claim atomic status as does the neuron, and larger inter-neuronal structures can also make that claim.

Everything I've said applies equally well if you consider simulating
the behaviour of subneuronal or multineuronal structures. Neurons are
just a convenient unit to work with.

> And on top of that you assume that digital simulations of whatever structures you arbitrarily designate as atomic will in fact work exactly like the supposed atomic structures you hope to simulate -- which presupposes that the brain in actual fact exists as a digital computer.

It presupposes that the brain's processes can be described
algorithmically. This is probably, though not certainly, true. If it
is true, then it is possible to make a computerised brain that acts
exactly like a biological brain and has exactly the same consciousness
as the biological brain. As I've explained several times, denying this
last statement leads to self-contradiction. I think you have
understood this because, as Eric has also pointed out, in the partial
brain replacement thought experiment you claim that the patient
*won't* behave normally and that the surgeon will have to tweak the
rest of his brain to make him pass as normal. But that is just to say
that the artificial neurons were not zombie neurons to begin with,
since a zombie neuron by definition behaves exactly the same as a
biological neuron. So if you want to maintain that computers can't be
conscious, you are forced to agree that the brain is not computable,
and hence that zombies and weak AI are not possible.
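The premise this argument turns on is that a replacement component
reproducing a neuron's input-output behaviour exactly is undetectable
to the rest of the brain. A toy sketch of that premise (every name
here is hypothetical, invented for illustration, not anything from the
post):

```python
def biological_neuron(inputs):
    # Stand-in for the real neuron's input-output function:
    # fire (1) if the summed input exceeds a threshold.
    return 1 if sum(inputs) > 0.5 else 0

def artificial_neuron(inputs):
    # A drop-in replacement defined to compute the same function.
    return 1 if sum(inputs) > 0.5 else 0

def downstream_circuit(neuron, stimuli):
    # The "rest of the brain" sees only the neuron's outputs,
    # so it cannot tell which implementation produced them.
    return [neuron(s) for s in stimuli]

stimuli = [(0.2, 0.1), (0.4, 0.3), (0.9, 0.0)]
assert downstream_circuit(biological_neuron, stimuli) == \
       downstream_circuit(artificial_neuron, stimuli)
```

A "zombie neuron" that behaved differently from the original would, by
this definition, fail the assertion above.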

I should add that while this thought experiment does not prove
computationalism (since there is the possibility that the brain is not
computable), it does prove functionalism, of which computationalism is
a subset. That is, it proves that you cannot separate consciousness
from intelligent behaviour, or equivalently that consciousness cannot
be due to some essential substance or process in the brain. For
otherwise zombified brain components would be conceptually possible,
leading as before to logical contradiction.

Stathis Papaioannou