[ExI] Semiotics and Computability
stathisp at gmail.com
Tue Feb 16 13:54:27 UTC 2010
On 16 February 2010 23:45, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Tue, 2/16/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>> I have proposed the example of a brain which has enough
>> intelligence to know what the neurons are doing: "neuron no.
>> 15,576,456,757 in the left parietal lobe fires in response to
>> noradrenaline, then breaks down the noradrenaline by means of MAO and
>> COMT", and so on, for every brain event. That would be the equivalent of
>> the man in the CR: there is understanding of the low level events, but no
>> understanding of the high level intelligent behaviour which these events
>> give rise to. Do you see how there might be *two* intelligences here, a
>> high level and a low level one, with neither necessarily being aware of
>> the other?
> Doesn't matter. If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point.
> And it seems he has proved his point to you, but that you want nevertheless to fabricate some imaginary way around the conclusion. These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." :)
I think you have missed the point: even though we agree that the man
who internalises the room has no understanding, this does *not* mean
that the system has no understanding. The man's intelligence is only a
component of the system even if the man internalises the room.
As a general comment, it is normal in philosophical debate to set up a
sometimes complex argument, thought experiment, or similar device in
order to prove a point on which the parties disagree. I might think
that the whole CRA and the idea it purports to prove are ridiculous,
but it's bad form simply to dismiss an argument like that. Instead, I
have to pick it apart, show where there are hidden assumptions, or
think of a variation which leads to the opposite conclusion. This
sometimes leads to the pursuit of what you may consider a minor
technical point, while you are eager to return to restating what you
consider the big picture. But it is important to pursue these
apparently minor technical points, since if they fall, the whole
argument falls. That does not necessarily mean the initial proposition
was wrong, but it does mean that the particular argument chosen to
support it is wrong, and can no longer be used.