[ExI] Why the CRA is false, methodically

Will Steinberg steinberg.will at gmail.com
Mon Feb 22 06:00:47 UTC 2010


Searle's Chinese Room has become the heart of darkness of the recent
conversations.  The argument seems logically valid, but the error is
hidden beneath layers of metaphor and an oversimplified model of human
thought.

The man in the room, given access to his books of responses, can only
utilize a limited set of data.  A totally plain "a for b" system would limit
the room to inputs whose answers never change--base facts.  Any I/O
concerning a changing system--people, the environment, events in
general--cannot be handled by this limited set.
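
To make that concrete, here is a toy sketch of my own (in Python; the
rulebook entries are illustrative assumptions, not anything from Searle's
paper).  A static lookup handles base facts but fails the moment the right
answer depends on a changing world:

    # A minimal sketch of the static "a for b" rulebook.
    RULEBOOK = {
        "What is two plus two?": "Four.",
        "What is the capital of France?": "Paris.",
    }

    def room_reply(question):
        # Base facts work; anything whose answer changes over time does not.
        return RULEBOOK.get(question, "???")

    print(room_reply("What is two plus two?"))  # "Four." -- a base fact
    print(room_reply("What time is it?"))       # "???" -- depends on a changing world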

Another argument might say that the bookset instead contains the complete
possibility tree of human questions and responses.  Every question is
prefaced by a million if/then statements, in the vein of: if the person is a
man named Kenny who is 34 and has recently been a bit depressed, and is
married to Laura whose father just died, and today is their anniversary,
and...etc. for a while, then respond "Hey Kenny, how has Laura been?"  The
assumption of such an improbably large set of books is more proof that the
CRA lives in a strictly theoretical realm.  Anyone can see that the brain is
not equivalent to this system.
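
A back-of-the-envelope sketch of the scaling (both numbers below are
assumptions of mine, chosen only to show how the tree blows up):

    # Rough size of the "complete possibility tree" bookset.
    sentences_per_turn = 10**6   # assume a million distinguishable inputs per turn
    turns = 10                   # even a short ten-turn conversation

    paths = sentences_per_turn ** turns
    print(f"distinct conversation paths: 10^{len(str(paths)) - 1}")
    # -> 10^60 paths, each needing its own scripted response

Even with absurdly conservative numbers, no physical set of books covers
the tree.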

Worst of all, Searle ignores reality and imagines his machine as magically
produced from nothing.  Even basic physics dictates that building the
machine would entail producing the books!  This apparently magic information
cannot come from nowhere; it had to have been compiled somehow, by an
understanding system!  In fact, the very rating of how well the machine
understands Chinese is based on the opinions of...those who speak Chinese!

With these roadblocks, there is only one way around.  The man must be a
scribe as well, changing the rules in his books according to other,
higher-level instructions.  When there are enough of THESE instructions, the
machine will seem conscious.  But in doing so, the machine has had to
incorporate environmental information!  The man, though he may lack
understanding of the symbols themselves, can know his rules perfectly--he
knows which words go after which.  The only thing stopping him from being
totally aware is some way to associate those words with their referents.  If
he only knew what a few words actually meant, the man would be able to
define more based on context.  These meanings could only be acquired through
"senses," which would transmit environmental symbolic signatures to the
room, allowing the man to associate things he knew with the pictures.
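
Here is a sketch of this scribe variant (again a toy construction of my
own; the symbols and the "sense" channel are assumptions for illustration):

    # The man as scribe: he rewrites his rulebook whenever the "senses" pair
    # an unknown symbol with a referent he already understands.
    known_referents = {"dog"}   # the few meanings he actually has
    rulebook = {}               # symbol -> grounded meaning, built over time

    def sense(symbol, referent):
        # An environmental signature arrives alongside a symbol.
        if referent in known_referents:
            rulebook[symbol] = referent   # higher-level instruction: amend the books
        return rulebook.get(symbol, "<ungrounded>")

    print(sense("gou", "dog"))   # grounded against something he already knows
    print(sense("mao", "cat"))   # no anchor yet -> "<ungrounded>"
    known_referents.add("cat")   # a new anchor learned from context
    print(sense("mao", "cat"))   # now grounded

The point is only that each amendment pulls environmental information into
the books; the rulebook stops being a closed formal object.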

If this physically possible, logically valid form of the CRA is used, then
the man, knowing the rules and the symbols, will have gone through a process
identical to one all of us have undertaken: learning a language.

The ONLY functional version of Searle's Room implies that the man has gained
understanding of a language, at least as much as our brains do with their
methods.  By the time the information gathering required to produce the
system is complete, it will have formed logically equivalent structures in
the man's head.  No matter how you run it, learning, by a human, in a sense
we can ALL agree on, MUST be applied somewhere.  The information has to come
from somewhere and has to go somewhere.  And it is exactly the same
process--environmental cues being applied meta-algorithmically to a human
mind, which will still carry on with its bizarre human intelligence.

The CRA is a tautology.  Understanding = understanding; it comes with the
package.  Please give this a thought before continuing to use the flawed
version, though a valid and true version could help with analysis.