[ExI] Semiotics and Computability
stathisp at gmail.com
Tue Feb 16 10:56:12 UTC 2010
On 16 February 2010 03:10, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Mon, 2/15/10, Christopher Luebcke <cluebcke at yahoo.com> wrote:
>> If my understanding of the CRA is
>> correct (it may not be), it seems to me that Searle is
>> arguing that because one component of the system does not
>> understand the symbols, the system doesn't understand the
>> symbols. This to me is akin to claiming that because my
>> fingers do not understand the words they are currently
>> typing out, neither do I.
> Searle speaks for himself:
> My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
> Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.
I have proposed the example of a brain which has enough intelligence
to know what its neurons are doing: "neuron no. 15,576,456,757 in the
left parietal lobe fires in response to noradrenaline, then breaks
down the noradrenaline by means of MAO and COMT", and so on, for every
brain event. That would be the equivalent of the man in the CR: there
is understanding of the low-level events, but no understanding of the
high-level intelligent behaviour to which these events give rise. Do
you see how there might be *two* intelligences here, a high-level one
and a low-level one, with neither necessarily being aware of the other?