[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Wed Jan 6 14:32:05 UTC 2010


2010/1/6 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Tue, 1/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>>> No, I make no such claim. Cram's surgeon will no doubt
>>> find a way to keep the man walking, even if semantically
>>> brain-dead from the effective lobotomization of his
>>> Wernicke's and related.
>>
>> Well, Searle makes this claim.
>
> I don't think Searle ever considered a thought experiment exactly like the one we created here.

He did, and I finally found the reference. It was in his 1992 book,
"The Rediscovery of the Mind", pp 66-67. Here is a quote:

<...as the silicon is progressively implanted into your dwindling
brain, you find that the area of your conscious experience is
shrinking, but that this shows no effect on your external behavior.
You find, to your total amazement, that you are indeed losing control
of your external behavior. You find, for example, that when the
doctors test your vision, you hear them say, "We are holding up a red
object in front of you; please tell us what you see." You want to cry
out, "I can't see anything. I'm going totally blind." But you hear
your voice saying in a way that is completely out of your control, "I
see a red object in front of me." If we carry the thought-experiment
out to the limit, we get a much more depressing result than last time.
We imagine that your conscious experience slowly shrinks to nothing,
while your externally observable behavior remains the same.>

He is discussing here the replacement of neurons in the visual cortex
with functionally identical computer chips. He agrees that it is
possible to make functionally identical computerised neurons because
he accepts that physics is computable. He agrees that these p-neurons
will interact normally with the remaining b-neurons because they are,
by definition, functionally identical. He agrees that the behaviour of
the whole brain will continue as normal, since this follows
necessarily if the p-neurons and remaining b-neurons behave normally.
However, he believes that consciousness will become decoupled from
behaviour: the patient will become blind, will realise he is blind and
will try to cry out, but he will hear himself saying that everything is
normal and will be powerless to do anything about it. That would only
be possible if the patient were doing his thinking with something other
than his brain, since the brain, by stipulation, goes on behaving
exactly as it would if he saw normally. Searle does not seem to have
realised this, since he has always claimed that thinking is done with
the brain and that there is no immaterial soul.

> In any case, in this experiment, I simply deny your claim that my position entails that the surgeon cannot keep the man walking.
>
> The surgeon starts with a patient with a semantic deficit caused by a brain lesion in Wernicke's area. He replaces those damaged b-neurons with p-neurons believing just as you do that they will behave and function in every respect exactly as would have the healthy b-neurons that once existed there. However on my account of p-neurons, they do not resolve the patient's symptoms and so the surgeon goes back in to attempt more cures, only creating more semantic issues for the patient.

Can you explain why you think the p-neurons won't be functionally
identical? It seems that you do believe (unlike Searle) that there is
something about neuronal behaviour that is not computable, otherwise
there would be nothing preventing the creation of p-neurons that are
drop-in replacements for b-neurons, guaranteed to leave behaviour
unchanged. As I have said before, this is a logically consistent
position; it would mean p-neurons, weak AI, the Chinese Room and
philosophical zombies might all be impossible. It is a scientific
rather than a philosophical question whether the brain utilises
uncomputable physics, and the standard scientific position is that it
doesn't.


-- 
Stathis Papaioannou


