[ExI] Semiotics and Computability

Christopher Luebcke cluebcke at yahoo.com
Mon Feb 15 16:47:22 UTC 2010


I've just read the entirety of Searle's response to the systems argument (thank you for the link), and as near as I can tell, it amounts to this: the system as a whole is not intelligent because, like the man, it doesn't really understand the symbols. Yet whether a given system understands symbols ought to be what the experiment finds out, not what we presume before it begins.

Double-quoting Searle:

"All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him"

Again, how one determines whether a system--a man, a room, a man in a room (or a savant in the woods)--"understands symbols" is crucial to the point, yet nothing I've read so far shows Searle setting out the criteria by which he makes this judgement. It seems to me that Searle is instead making an a priori argument: he takes it that he (and we) know ahead of time that the system composed of the man and the rules is not intelligent, and then scoffs at any suggestion that it might be.

My response to the quote above is twofold:

1. I believe that understanding symbols is a process, not a state, and that it is therefore less useful to talk about what things compose a system than about what those things are doing
2. I believe that it is fruitless to debate whether a system understands symbols without first agreeing on
   a. What it means to "understand symbols"
   b. How we measure or detect whether a system (a man, a room, a computer) can understand symbols

To me, the problem is fundamentally empirical.
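
For what it's worth, the kind of pure symbol manipulation the room performs is easy to caricature in code. Below is a minimal sketch in Python (the rulebook entries and the room function are my own invention, not anything from Searle's paper): it maps input symbol strings to output strings by table lookup alone, with no representation of meaning anywhere in the program.

    # A toy "Chinese Room": responses are produced by table lookup alone.
    # The two rulebook entries below are invented stand-ins for Searle's
    # ledger; the program contains no representation of what they mean.

    RULEBOOK = {
        "你好吗": "我很好，谢谢",
        "你会说中文吗": "会一点",
    }

    def room(symbols: str) -> str:
        """Return whatever output the rules dictate for the input symbols.

        Nothing here knows what the symbols denote; the mapping is pure
        syntax, which is exactly the property the CRA trades on.
        """
        return RULEBOOK.get(symbols, "请再说一遍")  # default: ask to repeat

    print(room("你好吗"))  # -> 我很好，谢谢

Whether a vastly larger table, or a program with internal state, could ever cross over into understanding is precisely the question I'm claiming has to be settled by measurement rather than by intuition.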


----- Original Message ----
From: Gordon Swobe <gts_2000 at yahoo.com>
To: ExI chat list <extropy-chat at lists.extropy.org>
Sent: Mon, February 15, 2010 8:10:30 AM
Subject: Re: [ExI] Semiotics and Computability

--- On Mon, 2/15/10, Christopher Luebcke <cluebcke at yahoo.com> wrote:

> If my understanding of the CRA is
> correct (it may not be), it seems to me that Searle is
> arguing that because one component of the system does not
> understand the symbols, the system doesn't understand the
> symbols. This to me is akin to claiming that because my
> fingers do not understand the words they are currently
> typing out, neither do I.

Searle speaks for himself:

My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.

Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. 

http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html

-gts


      
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat



