[ExI] The digital nature of brains (was: digital simulations)

Gordon Swobe gts_2000 at yahoo.com
Sat Jan 30 14:31:15 UTC 2010


--- On Fri, 1/29/10, Eric Messick <eric at m056832107.syzygy.com> wrote:

> Ok, I just went and read the whole thing.

Thank you Eric. 

> He then starts in on the whole Chinese room thing, which
> makes up the bulk of the paper.  Here's an interesting bit:
> 
>      To me, Chinese writing is just so
> many meaningless squiggles.
> 
> Here, the lack of understanding is only relative to
> him.  

Right. The question of strong AI is not whether outside observers understand the system's inputs and outputs; it's whether the system itself understands them. 

> Later he will assert that a system which answers non-obvious
> questions about Chinese stories also has no meaning for the symbols. 
> It looks like he's extrapolated meaninglessness beyond its
> applicability.

He means that the system has no understanding.

> [...] in cases where the computer
> is not me, the computer has
> nothing more than I have in the
> case where I understand nothing.
> 
> Well, nothing except vast quantities of information about
> Chinese language sufficient to answer questions as well as a native
> speaker. He seems to consider this a trivial detail.

That information in the computer would seem important only to someone who did not understand the question of strong vs. weak AI. If the system has no conscious understanding of its inputs and outputs but can nonetheless converse intelligently by virtue of having that information, then it has only weak AI. Searle has no objection to weak AI.

 
> On the possibility that understanding is "more symbol
> manipulation":
> 
>      I have not demonstrated that this
> claim is false,
>      but it would certainly appear an
> incredible claim in the example.
> 
> Searle is acknowledging that his argument is weak.

He's addressing this claim: "2) that what the machine and its program do explains the human ability to understand the story and answer questions about it." And clearly in the example what the machine and its program do does not explain the human ability to understand the story and answer questions about it. He answers questions about one story in English and about another in Chinese, and his running the program in Chinese in no way changes the fact that he does not understand a word of Chinese. 

As far as a non-Chinese-speaker's understanding of Chinese goes, it makes no difference whatsoever whether he mentally runs a program that enables meaningful interactions in Chinese. This has major implications in the philosophy of mind, especially for the computationalist theory of mind, in which all our cognitive capacities are thought to be explained by programs. The program has zero effect. 


>      [...] what is suggested though
> certainly not demonstrated -- by
>      the example is that the computer
> program is simply irrelevant to
>      my understanding of the story.
> 
> Again, not demonstrated. 

His words take on more strength in context:

"On the basis of these two assumptions we assume that even if Schank's program isn't the whole story about understanding, it may be part of the story. Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested though certainly not demonstrated -- by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements. As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves
 have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding."

>      [...] whatever purely formal
> principles you put into the
>      computer, they will not be
> sufficient for understanding, since a
>      human will be able to follow the
> formal principles without
>      understanding anything.
> 
> Here he's simply asserting what he's trying to show. 

Here he states an analytic truth, at least for anyone who understands what is meant by "formal principles". 

Briefly: we cannot understand the meaning of a symbol by looking only at its form. We must learn the meaning in some other way and attach it to the form, such that we can subsequently recognize that form and know the meaning. 

> The human is a
> trivial component of the system, so its lack of
> understanding does not
> impair the system's understanding.

You miss the point here. The human can internalize the scripts and the entire room, becoming the system, and this in no way changes the conclusion that neither he nor anything inside him understands the meanings of the symbols. 

>      My car and my adding machine, on
> the other hand, understand
>      nothing[.]
> 
> Once again, he's simply asserting something. 

Do you think your car has an understanding of roads? How about doorknobs and screwdrivers? He makes a reductio ad absurdum argument here (but you leave out the context), illustrating that we must draw a line somewhere between those things that have minds and those that don't. 

> Again:
> 
>      The computer understanding is not
> just (like my understanding of
>      German) partial or incomplete; it
> is zero.
> 
> Why shouldn't it be partial?  Searle just asserts that
> it is zero.

It has zero understanding for the same reason DWAP's understanding is zero: syntax does not give semantics.

> On to the systems critique:
> 
>      Whereas the English subsystem
> knows that "hamburgers" refers to
>      hamburgers, the Chinese subsystem
> knows only that "squiggle
>      squiggle" is followed by "squoggle
> squoggle."
> 
> Here Searle makes a level of abstraction error.  The
> symbols may not mean anything to the human, but they certainly mean
> something to the system, or it wouldn't be able to answer questions about
> them, as we are told it can.

You fail to understand the distinction between strong and weak AI. Nobody disputes weak AI. Nobody disputes that computers will someday pass the Turing test. What is disputed is whether it will ever make sense to consider computers as possessing minds in the sense that humans have minds.


>      Indeed, in the case as described,
> the Chinese subsystem is simply
>      a part of the English subsystem, a
> part that engages in
>      meaningless symbol manipulation
> according to rules in English.
> 
> The symbols are meaningless because they are meaningless to
> *Searle*, not because they would be meaningless to the Chinese
> speaking system as a whole.

Apparently you believe that if you embodied the system as Searle did, and if you did not understand the symbols, just as Searle did not, the system would nevertheless have a conscious understanding of them. But I don't think you can articulate how. You just want to state it as an article of faith.

 
>      But the whole point of the
> examples has been to try to show that
>      that couldn't be sufficient for
> understanding, in the sense in
>      which I understand stories in
> English, because a person, and
>      hence the set of systems that go
> to make up a person, could have
>      the right combination of input,
> output, and program and still not
>      understand anything in the
> relevant literal sense in which I
>      understand English.
> 
> Level of abstraction error again: because the human does
> not understand, the vastly greater system which it is a part of
> must not understand either.

Did you simply miss his counter-argument to the systems reply? *He becomes the system* and still does not understand the symbols. There exists no "vastly greater system" that understands them, unless you want to step into the religious realm.
 
>      In short, the systems reply simply
> begs the question by insisting
>      without argument that the system
> must understand Chinese.
> 
> Looks to me like Searle is projecting a bit of begging the
> question onto his criticizers.  Searle states as part of the
> problem that the system behaves as though it understands Chinese as well 
> as a native speaker.  He then repeatedly assumes that the system
> does not understand, and concludes that it does not understand.

If you cannot explain how it has conscious understanding, then you have no reply to Searle. We cannot assume understanding based only on external behavior.
 
> The systems critique can be stated without an assumption
> of understanding:
> 
> If there is understanding, then it can reside outside of
> the human component.

Again, you must have missed Searle's counter-reply. He internalizes the entire system and yet neither he nor anything inside him understands the symbols.

-gts




      


