[ExI] The digital nature of brains (was: digital simulations)
eric at m056832107.syzygy.com
Sat Jan 30 03:54:08 UTC 2010
>If you have a genuine interest in this subject and want to engage me
>in intelligent discussion then please carefully read the target:
>MINDS, BRAINS, AND PROGRAMS
Ok, I just went and read the whole thing.
I think we've pretty well covered everything in there numerous times
in this discussion already. I'll note a few things, though.
Early on, Searle characterizes weak and strong AI, saying in effect
that weak AI attempts to study human cognition, while strong AI
attempts to duplicate it.
But according to strong AI, the computer is not merely a tool in
the study of the mind; rather, the appropriately programmed
computer really is a mind, in the sense that computers given the
right programs can be literally said to understand and have other
cognitive states.
He then starts in on the whole Chinese room thing, which makes up the
bulk of the paper. Here's an interesting bit:
To me, Chinese writing is just so many meaningless squiggles.
Here, the lack of understanding is only relative to him. Later he
will assert that the symbols are also meaningless to a system which
answers non-obvious questions about Chinese stories. It looks like
he's extrapolated meaninglessness beyond its applicability.
[...] in cases where the computer is not me, the computer has
nothing more than I have in the case where I understand nothing.
Well, nothing except vast quantities of information about the Chinese
language, sufficient to answer questions as well as a native speaker.
He seems to consider this a trivial detail.
On the possibility that understanding is "more symbol manipulation":
I have not demonstrated that this claim is false,
but it would certainly appear an incredible claim in the example.
Searle is acknowledging that his argument is weak.
[...] what is suggested -- though certainly not demonstrated -- by
the example is that the computer program is simply irrelevant to
my understanding of the story.
Again, not demonstrated. Good thing too, since it's the computer
program that is doing *all* of the understanding in the example.
[...] whatever purely formal principles you put into the
computer, they will not be sufficient for understanding, since a
human will be able to follow the formal principles without
understanding anything.
Here he's simply asserting what he's trying to show. The human is a
trivial component of the system, so its lack of understanding does not
impair the system's understanding.
My car and my adding machine, on the other hand, understand
nothing: they are not in that line of business.
Once again, he's simply asserting something, not arguing it. And again:
The computer understanding is not just (like my understanding of
German) partial or incomplete; it is zero.
Why shouldn't it be partial? Searle just asserts that it is zero.
On to the systems critique:
Whereas the English subsystem knows that "hamburgers" refers to
hamburgers, the Chinese subsystem knows only that "squiggle
squiggle" is followed by "squoggle squoggle."
Here Searle makes a level of abstraction error. The symbols may not
mean anything to the human, but they certainly mean something to the
system, or it wouldn't be able to answer questions about them, as we
are told it can.
Indeed, in the case as described, the Chinese subsystem is simply
a part of the English subsystem, a part that engages in
meaningless symbol manipulation according to rules in English.
The symbols are meaningless because they are meaningless to *Searle*,
not because they would be meaningless to the Chinese-speaking system
as a whole.
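To illustrate with a toy sketch (my own invention, not anything from
Searle's paper or this thread): an executor that matches opaque byte
strings against a rulebook answers Chinese questions without itself
knowing any Chinese. Whatever meaning there is lives in the rule
table plus the loop, not in the executor. In Python, with a made-up
two-entry rulebook standing in for the vast rule set the thought
experiment assumes:

    # A toy "Chinese room". The executor plays Searle's role: pure
    # symbol matching on opaque bytes, zero knowledge of what they mean.
    RULEBOOK = {
        "汉堡好吃吗？".encode(): "好吃。".encode(),          # "Are hamburgers tasty?" -> "Tasty."
        "故事里谁付了钱？".encode(): "那个男人。".encode(),  # "Who paid in the story?" -> "The man."
    }

    def executor(squiggles: bytes) -> bytes:
        # To this function every question is just so many meaningless
        # squiggles; it compares bytes for equality and nothing more.
        return RULEBOOK.get(squiggles, "？".encode())

    print(executor("汉堡好吃吗？".encode()).decode())  # prints: 好吃。

The executor's ignorance is real, but it tells us nothing about what
the rulebook-plus-executor system does or doesn't grasp. That is the
level-of-abstraction point.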
But the whole point of the examples has been to try to show that
that couldn't be sufficient for understanding, in the sense in
which I understand stories in English, because a person, and
hence the set of systems that go to make up a person, could have
the right combination of input, output, and program and still not
understand anything in the relevant literal sense in which I
understand English.
Level of abstraction error again: because the human does not
understand, the vastly greater system which it is a part of must not
understand either.
In short, the systems reply simply begs the question by insisting
without argument that the system must understand Chinese.
Looks to me like Searle is projecting a bit of begging the question
onto his critics. Searle stipulates as part of the problem setup that the
system behaves as though it understands Chinese as well as a native
speaker. He then repeatedly assumes that the system does not
understand, and concludes that it does not understand.
The systems critique can be stated without an assumption of
understanding: if there is understanding, then it can reside outside
of the human.
That is still enough to devastate most of Searle's claims, as he's
always relying on his statement that the human doesn't understand to
support the notion that there is no understanding.
It is, by the way, not an answer to this point to say that the
Chinese system has information as input and output and the
stomach has food and food products as input and output, since
from the point of view of the agent, from my point of view, there
is no information in either the food or the Chinese -- the
Chinese is just so many meaningless squiggles.
Searle claims here that there is no information in the meaningless (to
him) Chinese symbols. I'm going to invoke Shannon here. Even without
any meaning, the Chinese symbols are different from each other, and so
they hold information. Searle can tell the difference between
different characters without knowing what they mean. It's just plain
wrong to say that there is no information. It is an important mistake
too, as this whole thing revolves around the notion of information
processing.
Such a basic mistake does not give me confidence in Searle's
conclusions about these matters. Repeated unsubstantiated assertions
don't help either.
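To make the Shannon point concrete, here is a minimal sketch (Python;
the strings and the function name are my own illustrative choices).
Shannon entropy depends only on the relative frequencies of
distinguishable symbols; whether anyone attaches meaning to them
never enters the calculation:

    import math
    from collections import Counter

    def entropy_bits(symbols: str) -> float:
        # Shannon entropy in bits per symbol: H = sum p * log2(1/p).
        # Only distinguishability and frequency matter, not meaning.
        counts = Counter(symbols)
        total = len(symbols)
        return sum((c / total) * math.log2(total / c)
                   for c in counts.values())

    # Meaningless-to-me squiggles still carry information, because the
    # characters differ from one another:
    print(entropy_bits("对我来说这些汉字毫无意义"))  # > 0 bits per symbol
    print(entropy_bits("aaaaaaaa"))                  # 0.0: one symbol, no surprise

So by the very measure this whole discussion trades in, the Chinese
symbols Searle handles are anything but information-free.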
It is not the aim of this article to argue against McCarthy's
point, so I will simply assert the following without argument.
Except that it is the aim of the article to argue against McCarthy's
point. At least here he's acknowledging his unsubstantiated
assertions.
On to the robot criticism:
Now in this case I want to say that the robot has no intentional
states at all;
Searle *wants* the robot not to be intentional. He's attached to the
outcome, and it motivates his unsubstantiated assertion.
On to brain simulators:
I thought the whole idea of strong AI is that we don't need to
know how the brain works to know how the mind works.
That's not how he defined strong AI above. Strong AI just requires
writing a program with understanding; it doesn't require ignorance
about how the brain works.
As long as it simulates only the formal structure of the sequence
of neuron firings at the synapses, it won't have simulated what
matters about the brain, namely its causal properties, its
ability to produce intentional states.
Again, asserted without support.
If we could build a robot whose behavior was indistinguishable
over a large range from human behavior, we would attribute
intentionality to it, pending some reason not to. We wouldn't
need to know in advance that its computer brain was a formal
analogue of the human brain.
Searle presupposes that we have built such a robot, but again just
asserts without support that it won't have intentionality:
We would certainly make similar assumptions about the robot
unless we had some reason not to, but as soon as we knew that the
behavior was the result of a formal program, and that the actual
causal properties of the physical substance were irrelevant we
would abandon the assumption of intentionality.
Well, clearly Searle abandons that assumption, but I see no reason to.
Searle does not supply a reason.
Let us now return to the question I promised I would try to
answer:
[Could a machine think?]
granted that in my original example I understand the English and
I do not understand the Chinese, and granted therefore that the
machine doesn't understand either English or Chinese
He's asking us to grant him just what he's asking about! How much
more blatant could he be?
I am not at all sure what he means by the parenthetical here:
It is not because I am the instantiation of a computer program
that I am able to understand English and have other forms of
intentionality (I am, I suppose, the instantiation of any number
of computer programs)
Of course the brain is a digital computer. Since everything is a
digital computer, brains are too.
What?! Everything is a digital computer? That's just absurd. I have
no clue what he's trying to say here. He's not attributing this to
someone he's criticizing, and I see nothing to suggest that it isn't
a serious statement. But I can't attach a coherent meaning to that
string of symbols.