[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 21 15:29:57 UTC 2009


2009/12/22 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Mon, 12/21/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> But an S/H system is a physical system, like a brain. You
>> claim that the computer lacks something the brain has: that it is
>> only syntactic, and syntax does not entail semantics.
>
> Right.
>
>> But even if it
>> is true that syntax does not entail semantics, how can you be sure that
>> the brain has the extra ingredient for semantics and the computer
>> does not, and how does the CR argument show this? You've admitted that
>> it isn't because the parts of the CR have
>> components with independent intelligence and you've admitted that it
>> isn't because the operation of the CR has an algorithmic description
>> and that of the brain does not. What other differences between brains
>> and computers are there which are illustrated by the CRA? (Don't say that
>> the brain has understanding while the computer or CR does not: that is
>> the thing in dispute).
>
> I can't heed the first part of your prohibition at the end. You know your brain has understanding as surely as you can understand the words in this sentence. If you understand anything whatsoever, you have semantics. And you can reasonably locate that capacity in your brain because when your brain loses consciousness, you no longer have it.

I know my brain has understanding, but it is at least provisionally an
open question whether computers, or systems with only syntax, do. You
can't assume that they don't as part of your argument to prove that
they don't.

> The experiment in the CRA shows that programs don't have it because the man representing the program can't grok Chinese even if the syntactic rules of the program enable him to speak it fluently.

Forget any prejudices you may have. You are an alien scientist and you
observe a Chinese speaker and a CR, both of which seem to speak fluent
Chinese, a language which you have managed to learn from radio
transmissions. You are not sure if either of them actually understands
Chinese, and if so which one. The man in the CR freely admits to you
in English that he does not speak Chinese. What do you conclude?

That the man in the CR does not speak Chinese should not bias you
against the CR in your assessment of its understanding, any more than
the fact that the cells of the brain are obviously too stupid to
understand anything at all, let alone Chinese, biases you against the
brain. So if either the brain or the CR understands Chinese it is an
emergent, or high-level, property supervening on the low-level
behaviour of their components, not a simple property of the
components themselves. It could be due to the action potentials in
the neurons of the left temporal lobe, or to the flurry of
card-sorting activity by the man in the CR, particularly involving
the thumb and index finger of the right hand, since this is what is
observed when the
Chinese speaking is most active. With very careful observation you can
pick out more specific patterns: a consistent sequence of neuronal
firings or card-shuffling whenever the Chinese word for "dog" is
heard, for example.

After long observation you come to these conclusions: (1) you can't be
absolutely sure that either of the subjects actually understands what
they are saying, and (2) there is no basis for saying chemical
reactions are more likely to yield understanding than card-sorting is,
or vice versa.

> The same thing happens to be true in English too, and even of natural brains that know English. It's not so easy to see, but you cannot understand English sentences merely from knowing their syntactic structure, or merely from following syntactic rules. Syntactic rules are form based, not semantics based.
>
> Programs manipulate symbols according to their forms. A program takes an input like, for example, "What day of the week is it?" It looks at the *forms* of the words in the question to determine the operation it must perform to generate a proper output. It does not look at or know the *meanings* of the words. The meaning of the output comes from the human who reads it or hears it. If we want to say that the program has semantics then we must say it has what philosophers of the subject call "derived semantics", meaning that the program derives its semantics from the human operator.

Brains also just respond in a deterministic way, taking an input and
producing an output. In so doing they sometimes derive meaning from
the input. Why cannot the physical activity in computers or the CR
also derive meaning, if the physical activity in brains can?
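To make the quoted description concrete, here is a minimal sketch in
Python of the kind of purely form-based processing Gordon describes
(the patterns and canned replies are my own illustrative choices, not
anyone's actual program): the rules match the shape of the input
string and emit an output without any access to what the words mean.

import datetime
import re

# Purely form-based rules: each entry pairs a pattern describing the
# *shape* of an input string with a function that produces an output.
# Nothing here encodes what "day" or "week" means.
RULES = [
    (re.compile(r"what day of the week is it\?", re.IGNORECASE),
     lambda: datetime.date.today().strftime("%A")),
    (re.compile(r"\bhello\b", re.IGNORECASE),
     lambda: "Hello."),
]

def respond(text):
    # Match the form of the input against the rules and return a reply.
    for pattern, reply in RULES:
        if pattern.search(text):   # matching on form only
            return reply()
    return "I don't know."         # no rule matched this form

print(respond("What day of the week is it?"))  # prints e.g. "Monday"

Whether anything deserving the name "understanding" attaches to that
kind of rule-following, in a program, the CR or a brain, is of course
the very question at issue.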

>> Although the CRA does not show that computers can't be
>> conscious,
>
> It shows that even if computers *did* have consciousness, they still would have no understanding of the meanings of the symbols contained in their programs. The conscious Englishman in the room represents a program operating on Chinese symbols. He cannot understand Chinese no matter how well he performs those operations.

And if the neurons in the brain had a separate consciousness, even
linked in a swarm mind (so that the conscious entity comprises the
entire system), they wouldn't necessarily understand anything beyond
their low-level operations either. That is trivially obvious; you
don't need the CRA to demonstrate it.


-- 
Stathis Papaioannou


