[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Thu Jan 7 02:07:59 UTC 2010


--- On Wed, 1/6/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> I don't think Searle ever considered a thought experiment exactly like 
>> the one we created here.
> 
> He did...

You've merely re-quoted that same paragraph from that same Chalmers paper that you keep referencing. That experiment hardly compares to your much more ingenious one. :) 

As you point out:

> He is discussing here the replacement of neurons in the
> visual cortex....

But here we do something much more profound and dramatic: we replace the semantic center(s) of the brain, presumably integral to both spoken and unspoken thought.

> He agrees that it is possible to make functionally identical computerised
> neurons because he accepts that physics is computable. 

He accepts that physics is computable, and that the brain is computable, but he certainly would not agree that your p-neurons act "functionally identical" to b-neurons if we include c-neuron capability in that definition.

> However, he believes that consciousness will become
> decoupled from behaviour: the patient will become blind, will realise he
> is blind and try to cry out, but he will hear himself saying that
> everything is normal and will be powerless to do anything about it. That
> would only be possible if the patient is doing his thinking with
> something other than his brain...

It looks to me that he does his thinking with the portion of his natural brain that still exists. Searle goes on to describe how, as the experiment progresses and more microchips take the place of the remaining b-neurons, the rest of his natural brain vanishes along with his experience.

> ...he has always claimed that thinking is done with the brain and there 
> is no immaterial soul.

Right. So perhaps Searle used some loose language in a few sentences, and perhaps you misinterpreted him based on those sentences from a single paragraph, taken out of context, in a paper written by one of his critics. Better to look at his entire philosophy.

>> The surgeon starts with a patient with a semantic
>> deficit caused by a brain lesion in Wernicke's area. He
>> replaces those damaged b-neurons with p-neurons believing
>> just as you do that they will behave and function in every
>> respect exactly as would have the healthy b-neurons that
>> once existed there. However on my account of p-neurons, they
>> do not resolve the patient's symptoms and so the surgeon
>> goes back in to attempt more cures, only creating more
>> semantic issues for the patient.
> 
> Can you explain why you think the p-neurons won't be
> functionally identical? 

You didn't reply to a fairly lengthy post of mine yesterday so perhaps you missed my answer to that question. I'll cut, paste and add to my own words...

You've made the same assumption here (wrongly, imo) as in your last experiment: that p-neurons will behave and function exactly like the b-neurons they replaced. They won't, except perhaps under epiphenomenalism, the view that experience plays no role in behavior.

If you accept epiphenomenalism and reject the common and, in my opinion, more sensible view that experience does affect behavior, then we need to discuss that philosophical problem before we can go forward. (Should we?)

Speaking as one who rejects epiphenomenalism, it seems to me that serious complications will arise for the first surgeon who attempts this surgery with p-neurons. Why?

Because...

1) experience affects behavior,
2) behavior includes neuronal behavior, and
3) the experience of one's own understanding of words counts as a very important kind of experience.

It follows that:

Non-c-neurons in the semantic center of the brain will not behave like b-neurons. And because the p-neurons in Cram's brain, on my view, equal non-c-neurons, they won't behave like the b-neurons they replaced.

Does that make sense to you? I hope so.

This conclusion seems much more apparent to me in this new experimental set-up of yours. In your last, I wrote something about how the subject might turn left when he would otherwise have turned right. In this experiment, I see that he might turn left onto a one-way street in the wrong direction. Fortunately for Cram (or at least for his body), the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons throughout his entire brain until his patient appears ready for life on the streets, zombifying much or all of his brain in the process.

> It seems that you do believe (unlike Searle) that there is
> something about neuronal behaviour that is not computable,

No, I don't suppose there is anything non-computable about them. But I do believe that mere computational representations of b-neurons (aka p-neurons) do not equal c-neurons.

> otherwise there would be nothing preventing the creation of p-neurons
> that are drop-in replacements for b-neurons, guaranteed to leave
> behaviour unchanged. 

See above re: epiphenomenalism.

-gts


More information about the extropy-chat mailing list