[ExI] Semiotics and Computability
Stathis Papaioannou
stathisp at gmail.com
Mon Feb 8 10:59:02 UTC 2010
On 8 February 2010 08:05, Spencer Campbell <lacertilian at gmail.com> wrote:
> Seems pretty clear to me, as a neuron-by-neuron replacement is
> precisely what I've wanted for the past two to five years. I would
> advise phrasing it again, simply and concisely, because (a) what you
> have in mind may have changed since you last did so, (b) I might have
> overwritten your description with my own, and (c) the point on which
> Gordon disagrees remains a total mystery.
The premise is that it is possible to make an artificial neuron which
behaves exactly the same as a biological neuron, but lacks
consciousness. We've been discussing computer consciousness, but it
doesn't have to be a computerised neuron; we could say it is animated
by a little demon, and the conclusion of the thought experiment would
remain unchanged. These zombie neurons are then put into your head,
replacing normal neurons that play some important role in a conscious
process, such as visual perception or the understanding of language.
Before going further, is it perfectly clear that the behaviour of the
remaining biological parts of your brain and your overall behaviour
will remain unchanged? If not, then the artificial neuron does not
work as claimed.
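As an aside, here is a minimal sketch in Python of what "behaves
exactly the same" means at the level of the interface the rest of the
brain sees. The class and function names are purely illustrative and
not part of the argument:

class BiologicalNeuron:
    """Reference neuron: fires iff its summed input exceeds a threshold."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def fire(self, inputs):
        return sum(inputs) > self.threshold


class ArtificialNeuron:
    """Drop-in replacement: different internals, identical input-output behaviour."""
    def __init__(self, threshold=1.0):
        self._t = threshold

    def fire(self, inputs):
        total = 0.0
        for x in inputs:   # a 'little demon' could carry out this step by step
            total += x
        return total > self._t


def downstream_brain(neuron, inputs):
    # The rest of the brain only ever sees the fire() output.
    return "ouch" if neuron.fire(inputs) else "fine"


for stimulus in ([0.4, 0.3], [0.9, 0.6]):
    assert downstream_brain(BiologicalNeuron(), stimulus) == \
           downstream_brain(ArtificialNeuron(), stimulus)

The downstream code cannot tell which implementation it is wired to;
whether anything other than the interface behaviour could differ is
exactly what the thought experiment probes.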
OK, so your behaviour is unchanged and your thoughts are unchanged as
a result of the substitution; for if your thoughts changed, you would
be able to say "my thoughts have changed", and therefore your
behaviour would have changed. What of your consciousness? If your
consciousness changed as a result of the substitution, you would be
unable to notice the change, since, again, if you noticed it you would
be able to say "I've noticed a change", and that would be a change in
your behaviour, which is impossible.
So: if your consciousness changed as a result of the substitution, you
would be unable to notice any change. You would lose all visual
perception, yet not only behave as if you had normal vision but also
honestly believe that you had normal vision. Or you would lose the
ability to understand words starting with "r", yet still be able to
use these words appropriately and honestly believe that you understood
what these words meant. You would be partially zombified, but you
wouldn't know it. In which case, how do you know you don't now have
zombie vision, zombie understanding or a zombie toothache? If zombie
consciousness is indistinguishable objectively *or* subjectively (i.e.
by the unzombified part of a partly zombified mind) from real
consciousness, then the claim that there is nevertheless a distinction
is meaningless.
The conclusion, therefore, is that the original premise is false: it
is not possible to make an artificial neuron which behaves exactly the
same as a biological neuron, but lacks consciousness. Either such a
neuron would not really behave like a biological neuron, or it would
behave like a biological neuron and also contribute to consciousness
just as a biological neuron does. This is a statement of the
functionalist position, of which computationalism is a subset. It is
possible that computationalism is false while functionalism is still
true.
Note that the above argument assumes no theory of consciousness. Its
conclusion is just that if consciousness exists at all, whatever it
is, it is ineluctably linked to brain function.
--
Stathis Papaioannou