[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Mon Dec 21 14:07:15 UTC 2009


--- On Mon, 12/21/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> But a S/H system is a physical system, like a brain. You
> claim that the computer lacks something the brain has: that it is 
> only syntactic, and syntax does not entail semantics. 

Right.

> But even if it 
> is true that syntax does not entail semantics, how can you be sure that
> the brain has the extra ingredient for semantics and the computer
> does not, and how does the CR argument show this? You've admitted that 
> it isn't because the parts of the CR have 
> components with independent intelligence and you've admitted that it 
> isn't because the operation of the CR has an algorithmic description 
> and that of the brain does not. What other differences between brains and
> computers are there which are illustrated by the CRA? (Don't say that 
> the brain has understanding while the computer or CR does not: that is
> the thing in dispute).

I can't heed the first part of the prohibition at the end of your message. You know your brain has understanding as surely as you can understand the words in this sentence. If you understand anything whatsoever, you have semantics. And you can reasonably locate that capacity in your brain, because when your brain loses consciousness, you no longer have it.

The experiment in the CRA shows that programs don't have it because the man representing the program can't grok Chinese even if the syntactic rules of the program enable him to speak it fluently.

The same holds for English, and even for natural brains that know English. It's not so easy to see, but you cannot understand English sentences merely from knowing their syntactic structure, or merely from following syntactic rules. Syntactic rules are form-based, not semantics-based.

Programs manipulate symbols according to their forms. A program takes an input, for example "What day of the week is it?", and looks at the *forms* of the words in the question to determine the operation it must perform to generate a proper output. It does not look at or know the *meanings* of the words. The meaning of the output comes from the human who reads or hears it. If we want to say that the program has semantics, then we must say it has what philosophers of the subject call "derived semantics", meaning that the program derives its semantics from the human operator.
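To make the point concrete, here is a minimal sketch (my own illustration in Python, not taken from any real system) of the kind of purely form-based symbol manipulation I mean. The rule table and the respond() function are hypothetical; the point is that every step operates on the shapes of the strings, never on what the words mean:

import re
from datetime import date

# A purely syntactic rule table: each entry pairs a surface *form*
# (a pattern over character strings) with a procedure that builds an
# output string. Nothing here refers to what any word means.
RULES = [
    (re.compile(r"what day of (the )?week is it\?", re.IGNORECASE),
     lambda m: date.today().strftime("%A")),   # emits e.g. "Monday"
    (re.compile(r"\bhello\b", re.IGNORECASE),
     lambda m: "Hello."),
]

def respond(text):
    # Match the *form* of the input against the rule table and return
    # the associated output string. No meanings are consulted anywhere.
    for pattern, make_reply in RULES:
        match = pattern.search(text)
        if match:
            return make_reply(match)
    return "I have no rule for that form."

print(respond("What day of the week is it?"))

Whatever meaning the output "Monday" has, it attaches only when a human reads it; the program itself has nothing but the derived semantics mentioned above.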

> Although the CRA does not show that computers can't be
> conscious, 

It shows that even if computers *did* have consciousness, they still would not understand the meanings of the symbols contained in their programs. The conscious Englishman in the room represents a program operating on Chinese symbols. He cannot understand Chinese no matter how well he performs those operations.

I'll try to answer your partial brain replacement scenario again later. Sorry, I'm not putting you off... out of time.

-gts
