[ExI] Semiotics and Computability
Stathis Papaioannou
stathisp at gmail.com
Sat Feb 20 02:25:15 UTC 2010
On 19 February 2010 12:41, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Thu, 2/18/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> Or 3) implementing programs leads to understanding.
>>
>> It seems that you just can't get past the very obvious
>> point that although the man has no understanding of language, he is
>> just a trivial part of the system, even if he internalises all the
>> components of the system. His intelligence is in fact mostly
>> superfluous. What he does is something a punchcard machine could do.
>> In fact, the same could be said of the intelligence of the man with
>> respect to knowledge of Chinese: it isn't a part of his cognitive
>> competence, not even as zombie intelligence. It's as if you had a being
>> of godlike intelligence (and consciousness) in your head whose only
>> job was to make the neurons fire in the correct sequence. Do you see
>> that such a being would not necessarily know anything about what you
>> were thinking about, and you would not necessarily know anything about
>> what it was thinking about?
>
> As if I had a "being with godlike intelligence in my head who makes the neurons fire"? Honestly, Stathis, I have no idea what you're talking about.
>
> The CRA thought experiment involves *you, the reader*, imagining *yourself* in the room (or as the room), using *your* mind to attempt to understand the Chinese symbols.
>
> Nobody wants to know about strange speculations of *something else* in or about your brain that might understand the symbols when you don't understand them. I mentioned the pink unicorns the other day for that reason. If mysterious pink unicorns in some mysterious place understand the symbols, but you have no access to their understanding, then Searle still got it right.
I am trying to show you that the fact that a system has an
intelligence that understands only the low-level processes does not
preclude the existence of another intelligence that has higher-level
understanding. The brain is exactly that sort of system, except that
the neurons are much dumber than a man. To even up the competition, I
propose making the neurons much smarter.
Here is what you claim from the CRA: the man in the room has an
understanding of the low-level processes but not of Chinese, even
though the room speaks Chinese. Therefore, the Chinese-speaking room
has no understanding of Chinese.
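To see the two levels side by side, here is a toy Python sketch of
the room. All the names (RULEBOOK, interpreter) are mine, invented
for this illustration; it is a cartoon of the setup, not anything
from Searle's paper:

# The man's rulebook: bare symbol-to-symbol mappings. To him they
# are meaningless shapes; whatever "understanding" there is lives
# in the rule set taken as a whole.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def interpreter(symbols: str) -> str:
    # The man in the room: match shapes, copy out the listed reply.
    # No step requires knowing what any symbol means.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(interpreter("你好吗？"))  # From outside, the room "speaks Chinese".

The man (interpreter) understands only the matching procedure;
fluency, such as it is, belongs to the room as a whole.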
Here is my analogous claim: if your brain contained a
super-intelligent being that made the neurons fire in the appropriate
order, it would have an understanding of the low-level brain processes
but not of English, even though you speak English. Therefore, you
wouldn't really understand English.
If the latter experiment is silly, then the CRA is also silly.
However, both experiments are logically possible, which is what we are
interested in.
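A companion sketch of the second scenario, on the same terms (the
weights and names here are contrived for illustration): the low-level
agent applies only the local firing rule, while the network it drives
computes XOR, something visible only at the higher level.

def fire_neuron(inputs, weights, threshold):
    # The super-intelligent being's whole job: the local firing rule.
    # Nothing here mentions, or needs, what the network computes.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def network(x1, x2):
    # Network-level behaviour: XOR, built from three dumb firings.
    h1 = fire_neuron([x1, x2], [1, 1], 1)    # fires if either input fires
    h2 = fire_neuron([x1, x2], [1, 1], 2)    # fires only if both fire
    return fire_neuron([h1, h2], [1, -1], 1) # h1 and not h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", network(a, b))         # prints the XOR truth table

The agent running fire_neuron could do its job perfectly while knowing
nothing about XOR, just as you speak English while knowing nothing
about your neurons.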
--
Stathis Papaioannou