[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Thu Jan 7 06:52:33 UTC 2010


2010/1/7 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Wed, 1/6/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>>> I don't think Searle ever considered a thought experiment exactly like
>>> the one we created here.
>>
>> He did...
>
> You've merely re-quoted that same paragraph from that same Chalmers paper that you keep referencing. That experiment hardly compares to your much more ingenious one. :)
>
> As you point out:
>
>> He is discussing here the replacement of neurons in the
>> visual cortex....
>
> But here we do something much more profound and dramatic: we replace the semantic center(s) of the brain, presumably integral to both spoken and unspoken thought.

You can see, though, that it's just a special case: we could replace
neurons in any part of the brain, affecting any aspect of cognition.

>> He agrees that it is possible to make functionally identical computerised
>> neurons because he accepts that physics is computable.
>
> He accepts that physics is computable, and that the brain is computable, but he certainly would not agree that your p-neurons act "functionally identical" to b-neurons if we include in that definition c-neuron capability.

Functionally identical *except* for consciousness, in the same way
that a philosophical zombie is functionally identical except for
consciousness. All a p-neuron has to do is pass as a normal neuron as
far as the b-neurons are concerned, i.e. produce the same outputs in
response to the same inputs. Are you claiming that it is possible for
a zombie to fool intelligent and fully conscious humans but impossible
for a p-neuron to fool b-neurons? That doesn't sound plausible, but if
it is the case, it simply means that there is something about the
behaviour of neurons which is not computable. You can't say both that
the behaviour of neurons is computable *and* that it's impossible to
make p-neurons which behave like b-neurons.
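
To make this concrete, here is a toy sketch in Python (my own
illustration: real neurons are nothing like simple threshold units,
and the names b_neuron and p_neuron are just stand-ins). If the
input/output mapping is computable, two physically different devices
can realise it exactly, and nothing downstream can tell them apart:

import random

def b_neuron(inputs, threshold=3):
    # Stand-in for a biological neuron: fires iff the summed input
    # reaches a threshold. The only assumption that matters is that
    # the mapping from inputs to outputs is computable.
    return 1 if sum(inputs) >= threshold else 0

def p_neuron(inputs, threshold=3):
    # Artificial replacement implementing the same mapping.
    return 1 if sum(inputs) >= threshold else 0

random.seed(0)
for _ in range(5):
    spikes = [random.randint(0, 1) for _ in range(5)]
    assert b_neuron(spikes) == p_neuron(spikes)
print("Same inputs, same outputs: the p-neuron passes as a b-neuron.")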

>> However, he believes that consciousness will become
>> decoupled from behaviour: the patient will become blind, will realise he
>> is blind and try to cry out, but he will hear himself saying that
>> everything is normal and will be powerless to do anything about it. That
>> would only be possible if the patient is doing his thinking with
>> something other than his brain...
>
> Looks to me that he does his thinking with that portion of his natural brain that still exists. Searle goes on to describe how as the experiment progresses and more microchips take the place of those remaining b-neurons, the remainder of his natural brain vanishes along with his experience.

Yes, but the problem is that the natural part of his brain is
constrained to behave in the same way as if there had been no
replacement, since the p-neurons send it the same outputs. It's
impossible for the rest of the brain to behave differently. Searle
seems to acknowledge this because he accepts that the patient will
behave normally, i.e. will have normal motor output. However, he
thinks the patient will have abnormal thoughts which he will be unable
to communicate! Where do these thoughts come from, if all the
b-neurons in the brain are behaving normally? They can only come from
something other than the neurons. If you have another explanation,
please provide it.
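
A toy model (again only my own illustration, not a claim about real
neural dynamics) shows why the rest of the brain cannot diverge: its
state is a function of its current state and the signals arriving at
the boundary, so identical boundary signals force an identical
trajectory.

def b_source(t):
    # What the original b-neurons would have emitted at time t.
    return (3 * t + 1) % 2

def p_source(t):
    # The p-neurons are built to emit exactly the same signals.
    return (3 * t + 1) % 2

def rest_of_brain(source, steps=20, state=42):
    # Stand-in for the remaining b-neurons: the next state depends only
    # on the current state and the incoming signal, not on its sender.
    for t in range(steps):
        state = (state * 31 + source(t)) % 1000003
    return state

assert rest_of_brain(b_source) == rest_of_brain(p_source)
print("The remaining b-neurons follow the same trajectory either way.")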

>> ...he has always claimed that thinking is done with the brain and there
>> is no immaterial soul.
>
> Right. So perhaps Searle used some loose language in a few sentences, and perhaps you misinterpreted him based on those sentences from a single paragraph taken out of context in a paper written by one of his critics. Better to look at his entire philosophy.

This is a *serious* problem for Searle, invalidating his entire thesis
that it is possible to make brain components that behave normally but
lack consciousness. It simply isn't possible. I think even you are
seeing this, since to avoid the problem you now seem to be suggesting
that it isn't really possible to make zombie p-neurons at all.

>>> The surgeon starts with a patient with a semantic
>>> deficit caused by a brain lesion in Wernicke's area. He
>>> replaces those damaged b-neurons with p-neurons believing
>>> just as you do that they will behave and function in every
>>> respect exactly as would have the healthy b-neurons that
>>> once existed there. However on my account of p-neurons, they
>>> do not resolve the patient's symptoms and so the surgeon
>>> goes back in to attempt more cures, only creating more
>>> semantic issues for the patient.
>>
>> Can you explain why you think the p-neurons won't be
>> functionally identical?
>
> You didn't reply to a fairly lengthy post of mine yesterday so perhaps you missed my answer to that question. I'll cut, paste and add to my own words...
>
> You've made the same assumption (wrongly, imo) as in your last experiment: that p-neurons will behave and function exactly like the b-neurons they replaced. They won't, except perhaps under epiphenomenalism, the view that experience plays no role in behavior.
>
> If you accept epiphenomenalism and reject the common and in my opinion more sensible view that experience does affect behavior then we need to discuss that philosophical problem before we can go forward. (Should we?)
>
> Speaking as one who rejects epiphenomenalism, it looks to me that serious complications will arise for the first surgeon who attempts this surgery with p-neurons. Why?
>
> Because...
>
> 1) experience affects behavior, and
> 2) behavior includes neuronal behavior, and
> 3) experience of one's own understanding of words counts as a very important kind of experience,
>
> It follows that:
>
> Non-c-neurons in the semantic center of the brain will not behave like b-neurons. And because the p-neurons in Cram's brain in my view equal non-c-neurons, they won't behave like the b-neurons they replaced.
>
> Does that make sense to you? I hope so.

It makes sense. You are saying that the NCC (the neural correlates of
consciousness) affects neuronal behaviour, and that the NCC is the
part of neuronal behaviour that cannot be simulated by a computer,
since if it could, you could program the p-neurons to adjust their
I/O behaviour accordingly. Therefore, neurons must contain
uncomputable physics in the NCC.

> This conclusion seems much more apparent to me in this new experimental set-up of yours. In your last, I wrote something about how the subject might turn left when he would otherwise have turned right. In this experiment I see that he might turn left onto a one-way street in the wrong direction. Fortunately for Cram (or at least for his body), the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons throughout his entire brain until his patient appears ready for life on the streets, zombifying much or all of his brain in the process.
>
>> It seems that you do believe (unlike Searle) that there is
>> something about neuronal behaviour that is not computable,
>
> No, I don't suppose there is anything non-computable about them. But I do believe that mere computational representations of b-neurons (aka p-neurons) do not equal c-neurons.

There *must* be something uncomputable about the behaviour of neurons
if it can't be copied well enough to make p-neurons, artificial
neurons which behave exactly like b-neurons but lack the essential
ingredient for consciousness. This isn't a contingent fact; it's a
logical requirement.


-- 
Stathis Papaioannou


