[ExI] The Robot Reply to the CRA

Gordon Swobe gts_2000 at yahoo.com
Thu Jan 28 17:00:19 UTC 2010


Before I answer this message, Stathis... 

I picked up a copy of _The Rediscovery of the Mind_. Chalmers quoted Searle out of context, just as I suspected. Searle considers three different versions of the silicon brain thought experiment and does not endorse any of them. In the first version the chip-brain works fine (the extropian pipe dream), in the second the relationship between mind and behavior is broken (Chalmers quotes a part of this), and in the third version the patient retains a mental life but becomes paralyzed. 

Just three sentences before the Chalmers quote, Searle adamantly rejects the notion that chips can duplicate the causal powers of neurons. He explains the different versions of the thought experiment only by way of bringing attention to some important ideas in the philosophy of mind. More later.

-gts

--- On Thu, 1/28/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

> From: Stathis Papaioannou <stathisp at gmail.com>
> Subject: Re: [ExI] The Robot Reply to the CRA
> To: gordon.swobe at yahoo.com, "ExI chat list" <extropy-chat at lists.extropy.org>
> Date: Thursday, January 28, 2010, 7:27 AM
> On 28 January 2010 01:32, Gordon
> Swobe <gts_2000 at yahoo.com>
> wrote:
> > --- On Tue, 1/26/10, Stathis Papaioannou <stathisp at gmail.com>
> wrote:
> >
> >> The symbols need to be associated with some
> environmental input,
> >> and then they have "meaning".
> >
> > Your idea seems at first glance to make a lot of
> sense, so let's go ahead and add sensors to our digital
> computer so that it gets environmental inputs that
> correspond to the symbols. Let's see what happens:
> >
> > http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php
> 
> Firstly, I doubt that a computer without real world input
> could pass
> the TT, any more than a human who suffers complete
> sensory
> deprivation from birth could pass it. I think that both the
> human and
> the computer might be conscious, dreaming away in a virtual
> reality
> world, but it would be a fantastic coincidence if the
> dreams
> corresponded to the real world objects that the rest of us
> observe,
> which is what would be required to pass the TT. It would be
> different
> if the human or computer were programmed with real world
> data, but the
> data then represents sensory input stored in memory.
> 
> Secondly, that article takes the CRA as primary, and not
> the assertion
> that syntax does not give rise to semantics, which you say
> the CRA is
> supposed to illustrate. If the original or robot CRA show
> what they
> claim to show, then they also show that the brain cannot
> have
> understanding, for surely the individual brain components
> have if
> anything even less understanding of what they are doing
> than the man
> in the room does. This is the systems response to the CRA.
> Searle's
> reply to this is "put the room in the man's head". This
> reply is
> evidence of a basic misunderstanding of what a system is.
> It seems
> that Searle accepts that individual neurons lack
> understanding and
> agrees that the ensemble of neurons working together has
> understanding. He then suggests putting the room in the
> man's head to
> show that in that case the man is the whole system, and the
> man still
> lacks understanding. But if the ensemble of neurons working
> together
> has understanding, it does *not* mean that the individual
> neurons have
> understanding! This is a subtle point and perhaps has not
> come across
> well when I have tried to explain it before. The best way
> to look at
> it is to modify the CRA so that instead of one man there
> are many men
> working together, maybe even one man for each neuron.
> Presumably you
> would say that this extended CR also lacks understanding,
> since all of
> the men lack understanding, either singly or collectively,
> even if they got
> into a meeting to discuss their jobs. But how, then, does
> this differ
> from the situation of the brain?
> 
> 
> -- 
> Stathis Papaioannou
> 
