[ExI] The Robot Reply to the CRA

ablainey at aol.com ablainey at aol.com
Thu Jan 28 16:47:25 UTC 2010


Echooooooo, echo, echo

-----Original Message-----
From: Stathis Papaioannou <stathisp at gmail.com>
To: gordon.swobe at yahoo.com; ExI chat list <extropy-chat at lists.extropy.org>
Sent: Thu, 28 Jan 2010 12:27
Subject: Re: [ExI] The Robot Reply to the CRA


On 28 January 2010 01:32, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Tue, 1/26/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> The symbols need to be associated with some environmental input,
>> and then they have "meaning".
>
> Your idea seems at first glance to make a lot of sense, so let's go ahead and 
add sensors to our digital computer so that it gets environmental inputs that 
correspond to the symbols. Let's see what happens:
>
> http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php

Firstly, I doubt that a computer without real world input could pass
the TT, any more than a human who suffers complete sensory
deprivation from birth could pass it. I think that both the human and
the computer might be conscious, dreaming away in a virtual reality
world, but it would be a fantastic coincidence if the dreams
corresponded to the real world objects that the rest of us observe,
which is what would be required to pass the TT. It would be different
if the human or computer were programmed with real-world data, but such
data would then just amount to sensory input stored in memory.

Secondly, that article takes the CRA as primary, and not the assertion
that syntax does not give rise to semantics, which you say the CRA is
supposed to illustrate. If the original or robot CRA show what they
claim to show, then they also show that the brain cannot have
understanding, for surely the individual brain components have if
anything even less understanding of what they are doing than the man
in the room does. This is the systems reply to the CRA. Searle's
reply to this is "put the room in the man's head". This reply is
evidence of a basic misunderstanding of what a system is. It seems
that Searle accepts that individual neurons lack understanding and
agrees that the ensemble of neurons working together has
understanding. He then suggests putting the room in the man's head to
show that in that case the man is the whole system, and the man still
lacks understanding. But if the ensemble of neurons working together
has understanding, it does *not* mean that the individual neurons have
understanding; likewise, the man who internalises the room is playing
the part of the neurons rather than the part of the whole system, so
his lack of understanding proves nothing about the system's. This is a
subtle point and perhaps has not come across
well when I have tried to explain it before. The best way to look at
it is to modify the CRA so that instead of one man there are many men
working together, maybe even one man for each neuron. Presumably you
would say that this extended CR also lacks understanding, since all of
the men lack understanding, whether singly or collectively, even if
they all got together in a meeting to discuss their jobs. But how,
then, does this differ
from the situation of the brain?
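
To make the point concrete, here is a toy sketch of my own (just an
illustration in Python, nothing drawn from Searle or the CRA
literature, and the task is made up for the example): each "man"
performs one trivial comparison and has no idea what the overall job
is, yet the ensemble as a whole recognises palindromes.

# Each 'man' checks one fixed pair of positions and reports a single
# bit. No individual man knows anything about palindromes.
def make_man(i, j):
    def man(s):
        return s[i] == s[j]
    return man

# The 'room' is just the collective verdict of all the men.
def ensemble(s):
    men = [make_man(i, len(s) - 1 - i) for i in range(len(s) // 2)]
    return all(man(s) for man in men)

print(ensemble("abcba"))   # True:  the system recognises a palindrome
print(ensemble("abcda"))   # False: though no single man ever does

The ability to recognise palindromes belongs to the organised activity
of the men, not to any of them singly or to all of them gathered in a
meeting; that, I think, is exactly the relationship between neurons and
understanding that the systems reply appeals to.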


-- 
Stathis Papaioannou

 

