[ExI] Semiotics and Computability

Gordon Swobe gts_2000 at yahoo.com
Fri Feb 19 01:41:38 UTC 2010


--- On Thu, 2/18/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Or 3) implementing programs leads to understanding.
>
> It seems that you just can't get past the very obvious
> point that although the man has no understanding of language, he is
> just a trivial part of the system, even if he internalises all the
> components of the system. His intelligence is in fact mostly
> superfluous. What he does is something a punchcard machine could do. 
> In fact, the same could be said of the intelligence of the man with 
> respect to knowledge of Chinese: it isn't a part of his cognitive 
> competence, not even as zombie intelligence. It's as if you had a being 
> of godlike intelligence (and consciousness) in your head whose only
> job was to make the neurons fire in the correct sequence. Do you see
> that such a being would not necessarily know anything about what you
> were thinking about, and you would not necessarily know anything about
> what it was thinking about?

As if I had a "being with godlike intelligence in my head who makes the neurons fire"? Honestly, Stathis, I have no idea what you're talking about.

The CRA (Chinese Room Argument) thought experiment involves *you, the reader*, imagining *yourself* in the room (or as the room), using *your* mind to try to understand the Chinese symbols.

Nobody wants to know about strange speculations about *something else* in or around your brain that might understand the symbols when you don't understand them. I mentioned the pink unicorns the other day for exactly that reason: if mysterious pink unicorns in some mysterious place understand the symbols, but you have no access to their understanding, then Searle still got it right.
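To make that concrete, here is a minimal sketch (Python, entirely my own illustration; the rulebook entries are made-up stand-ins for Searle's rulebook). It produces fluent-looking Chinese replies by blind symbol matching, and no step anywhere involves meaning:

# A toy "Chinese Room": the rulebook is a lookup table mapping
# input symbol strings to output symbol strings. The program
# follows it blindly; the phrases are hypothetical examples.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会。",       # "Do you speak Chinese?" -> "Yes."
}

def room(symbols: str) -> str:
    # Pure symbol manipulation: match the shapes of the input,
    # return the shapes the rulebook dictates. No semantics.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # prints a fluent-looking reply

The program answers "correctly", yet nothing in it, and nobody executing it by hand, thereby understands Chinese.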

-gts

