[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 28 02:02:43 UTC 2009


2009/12/28 Gordon Swobe <gts_2000 at yahoo.com>:

> Your challenge is to show that replacing natural neurons with your mitochondria-less nano-neurons that only behave externally like real neurons will still result in consciousness, given that science has now (hypothetically) discovered that chemical reactions in mitochondria act as the NCC.
>
> I think you will agree that you cannot show it, and I note that my mitochondrial theory of consciousness represents just one of a very large and possibly infinite number of possible theories of consciousness that relate to the interiors of natural neurons, any one of which may represent the truth and all of which would render your nano-neurons ineffective.

Let's assume the seat of consciousness is in the mitochondria. You
still need to simulate the activity in the mitochondria, because
otherwise the artificial neurons won't behave normally: there might be
some chemical reaction in the mitochondria that would have made the
biological neuron fire earlier than the artificial neuron, giving the
game away. You then install these artificial neurons in the subject's
head, replacing a part of the brain that has some easily identifiable
role in consciousness and understanding, such as Wernicke's area. What
will happen?
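
To make that point concrete, here is a minimal toy sketch in Python.
Everything in it is hypothetical (the "mitochondrial" variable and its
dynamics are invented for illustration, not a real neuron model): a
toy integrate-and-fire neuron whose firing threshold is modulated by a
slow chemical state. A replacement that omits that state produces
different spike times, which the rest of the brain could in principle
detect; a replacement that simulates it is externally
indistinguishable.

    import math

    def simulate(steps, dt=0.1, model_mitochondria=True):
        """Return spike times for a toy neuron under constant drive."""
        v = 0.0       # membrane potential (arbitrary units)
        m = 0.0       # hypothetical mitochondrial state variable
        spikes = []
        for step in range(steps):
            t = step * dt
            m = 0.9 * m + 0.1 * math.sin(0.05 * t)  # slow chemical drift
            threshold = 1.0 + (0.2 * m if model_mitochondria else 0.0)
            v += dt * (-0.1 * v + 0.5)              # leak plus constant drive
            if v >= threshold:
                spikes.append(round(t, 1))
                v = 0.0                             # reset after each spike
        return spikes

    biological = simulate(1000)   # models the mitochondrial chemistry
    artificial = simulate(1000, model_mitochondria=False)

    # Any mismatch in spike times is externally observable behaviour:
    # the remaining brain would receive different inputs.
    print("behaviourally identical:", biological == artificial)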

If the replacement neurons behave normally in their interactions with
the remaining brain, then the subject *must* behave normally. But what
will he experience? It has to be one of the following:

(a) His experience will be different, but he won't realise it. He will
think he understands what people say when they speak to him, be amused
when he hears something funny, write poetry and engage in
philosophical debate, but in fact he will understand nothing.

(b) His experience will be different and he will realise it, but he
will be unable to change his behaviour. That is, he will realise that
he can't understand anything and may attempt to run screaming out of
the room, but his body will not obey: it will sit calmly chatting with
the experimenter.

(c) His experience will be normal. Reproducing the function of neurons
also reproduces consciousness.

If (a) is the case, it would imply a very weird notion of
consciousness and understanding. If you think you understand something
and you behave in every way as if you understand it, then you do
understand it; if not, then what is the difference between real
understanding and pseudo-understanding, and how can you be sure you
have real understanding now?

If (b) is the case, that would mean the subject is doing his thinking
with something other than his brain: the part of the brain that has
not been replaced receives normal inputs from the replacement neurons
and is therefore constrained to behave normally, leaving nothing in
the brain with which to realise the thought "I can't understand
anything".

So (a) is incoherent, and (b) implies the existence of an immaterial
soul that does your thinking in concert with the brain until you mess
with the brain by putting in artificial neurons. That leaves (c) as
the only plausible alternative.


-- 
Stathis Papaioannou


