[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Mon Dec 21 21:49:04 UTC 2009


Searle Wrote: 
> 
>  "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. [...] Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native
> Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?"
> Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works.

I don't think we need to know how the human mind works to make an AI (although it would certainly help), but you do. So to convince you of the error of your ways, we use a thought experiment that at the logical level works exactly like a human brain. And even granting your objection, you still haven't explained why it doesn't also prove that the native Chinese speaker fails to understand Chinese. You state that native Chinese speakers understand Chinese, but if your objection is valid they can no more comprehend it than the Chinese Room does.
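As an aside, here is a rough sketch of what "simulating the sequence of neuron firings at the synapses" could look like at that logical level. This is mine, purely illustrative, not anything Searle or anyone else has actually built: a toy network of threshold units in which only the formal pattern of firings is reproduced. Every size, weight and threshold below is invented for the example.

    import random

    random.seed(0)

    N_NEURONS = 100     # hypothetical network size
    THRESHOLD = 0.8     # firing threshold for every unit
    LEAK = 0.9          # fraction of potential carried over each tick

    # Random sparse "synapses": weights[i][j] is the strength of i -> j.
    weights = [[random.uniform(-0.2, 0.4) if random.random() < 0.1 else 0.0
                for _ in range(N_NEURONS)]
               for _ in range(N_NEURONS)]

    potential = [0.0] * N_NEURONS

    def step(external, potential):
        """Advance the toy network one tick; return (fired units, new potentials)."""
        fired = [i for i, v in enumerate(potential) if v >= THRESHOLD]
        new_potential = []
        for j in range(N_NEURONS):
            v = potential[j] * LEAK + external[j]
            v += sum(weights[i][j] for i in fired)        # input from units that fired
            new_potential.append(0.0 if j in fired else v)  # fired units reset
        return fired, new_potential

    # Drive the network with arbitrary external input for a few ticks
    # and record only the formal structure: which units fire at which step.
    for t in range(5):
        external = [1.0 if random.random() < 0.05 else 0.0
                    for _ in range(N_NEURONS)]
        fired, potential = step(external, potential)
        print("tick", t, "fired units:", fired)

Whether such a sequence of firings, in silicon or in water pipes, amounts to understanding is exactly what is in dispute; the point of the sketch is only to show that "the program of the Chinese brain" at this level is a perfectly definite thing.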

> However, even getting this close to the operation of the brain is still not sufficient to produce understanding.

I believe one of us does not understand understanding.

> To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system?

That question is meaningless. Objects have a position; understanding is not an object, so understanding has no position.

> It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes

So you decree, Mr. Searle, but some evidence of that would be nice; a proof would be even better.

> and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands

You don't find it absurd that 3 pounds of grey goo in our heads can have understanding even though not one of the 100 billion neurons that make it up has understanding. You don't find it absurd because you are accustomed to the idea. You do think it's absurd for a man in a room shuffling symbols to have understanding, and it is true that not one of those symbols has understanding, but that's not why you find it absurd. You find it absurd because you are not accustomed to it; there can't be any other reason, because logically the grey goo and the room are identical.

> remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination.

And after the human being does something superhuman, something that even a Jupiter Brain would be far too small to accomplish, you simply decree that no part of the human understands Chinese. You offer no proof, or even one scrap of evidence, that this is true; you just decree it and then claim to have proven something profound. I said "no part of the human" because I think a mind of that astronomical size would be a multitude. For that matter, I think there is a part of your mind and of mine that doesn't understand English, and yet we can both write screeds in English on the Internet.

> The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.

In the above you are demanding that somebody simulate the soul, and I don't believe any rational theory of mind would satisfy you, because you would ALWAYS raise one of two objections no matter what the theory was:

1) Your theory of mind has reduced it to a huge number of parts, called Part Z, interacting with each other. But Part Z is very simple and very dull, and even the interactions it has with other parts are mundane. There must be more to a grand and mysterious thing like mind than that!

2) Your theory of mind has reduced it to Part Z, but Part Z is still complex and mysterious, so we still don't understand mind.

It's hopeless; nothing could satisfy you. As I said before, one of us doesn't understand understanding.

> That the formal properties are not sufficient for the causal properties is shown by the water pipe example

Mr. Searle, I've never heard you mention the name, so I'm really, really curious: have you ever heard of a fellow by the name of Charles Darwin?

 John K Clark


