[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Tue Jan 5 14:39:27 UTC 2010


2010/1/6 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Tue, 1/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> But you claim that it is possible to make p-neurons which
>> function like normal neurons but, being computerised, lack the NCC,
>> and putting these neurons into region A as replacements will not cause
>> the patient to fall to the ground unconscious.
>
> No, I make no such claim. Cram's surgeon will no doubt find a way to keep the man walking, even if he is left semantically brain-dead by the effective lobotomization of his Wernicke's area and related regions.

Well, Searle makes this claim. He says explicitly that the behaviour
of a brain can be simulated by a computer, and invokes Church's thesis
in support of this. However, he claims that the simulated brain won't
have consciousness, so the result would be a philosophical zombie.
Perhaps there is some confusion because Searle is talking about
simulating a whole brain rather than a single neuron, but if you can
make a zombie brain it should certainly be possible to make a zombie
neuron. That's what a p-neuron is: it acts just like a b-neuron, and
the b-neurons around it treat it as a b-neuron, but because it's
computerised, you claim, it lacks the essentials for consciousness. By
definition, if the p-neurons function as advertised they can be
swapped for the equivalent b-neurons and the person will behave
exactly the same and honestly believe that nothing has changed.
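To make "functions as advertised" concrete, here is a toy sketch in
Python (my own illustration; the threshold model and the names
BNeuron/PNeuron are assumptions, not anything Searle specifies). Two
implementations with the same input-output mapping are interchangeable
as far as the rest of the network can tell:

from typing import Protocol

class Neuron(Protocol):
    # A neuron, for this purpose, is anything mapping inputs to a spike.
    def fire(self, inputs: list[float]) -> bool: ...

class BNeuron:
    """Stand-in for a biological neuron: fires when summed input crosses a threshold."""
    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold
    def fire(self, inputs: list[float]) -> bool:
        return sum(inputs) >= self.threshold

class PNeuron:
    """Computerised replacement: same input-output mapping, different internals."""
    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold
    def fire(self, inputs: list[float]) -> bool:
        # Whatever realises this mapping internally is invisible to neighbours.
        return sum(inputs) >= self.threshold

# Any observer restricted to behaviour gets the same verdict from both:
for inputs in ([0.2, 0.3], [0.6, 0.7], [1.0]):
    assert BNeuron().fire(inputs) == PNeuron().fire(inputs)

The sketch only shows that behavioural equivalence is a well-defined,
testable notion; the philosophical question is whether anything over
and above it is needed for consciousness.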

If you *don't* believe p-neurons like this are possible then you
disagree with Searle. Instead, you believe that there is some aspect
of brain physics that is uncomputable, and therefore that weak AI and
philosophical zombies may not be possible. This is a logically
consistent position, while Searle's is not: his position implies that
a person could lose consciousness piece by piece while behaving
exactly the same and honestly reporting that nothing had changed.
However, there is no scientific evidence that the brain uses
uncomputable physics.


-- 
Stathis Papaioannou


