[ExI] The digital nature of brains (was: digital simulations)

Stathis Papaioannou stathisp at gmail.com
Sat Jan 30 15:55:25 UTC 2010


On 2010/1/31, Gordon Swobe <gts_2000 at yahoo.com> wrote in response to Eric Messick:

>> [...] whatever purely formal principles you put into the computer,
>> they will not be sufficient for understanding, since a human will be
>> able to follow the formal principles without understanding anything.
>>
>> Here he's simply asserting what he's trying to show.
>
> Here he states an analytic truth to those who understand what is meant by "formal principles".

A neuron will also be able to follow the formal principles without
understanding anything, or at any rate understanding much less than a
human doing the same job.

> Briefly: We cannot first understand the meaning of a symbol from looking only at its form. We must learn the meaning in some other way, and attach that meaning to the form, such that we can subsequently recognize that form and know the meaning.

Yes, symbol grounding, which occurs when you have sensory input. That
completely solves the logical problem of where symbols get their
meaning, but Searle goes on to postulate a superfluous, magical
further step whereby symbols get *true* meaning.

>> The human is a trivial component of the system, so its lack of
>> understanding does not impair the system's understanding.
>
> You miss the point here. The human can internalize the scripts and the entire room, becoming the system, and this in no way changes the conclusion that neither he nor anything inside him can understand the meanings of the symbols.

But the human's *intelligence* is irrelevant to the system, except
insofar as it allows him to do the symbol manipulation. It makes no
essential difference to the consciousness of the system, such as it
may be, if the symbol manipulation is done by a human, a punchcard
machine or a trained mouse. Stretching the definition of the term,
neurons also have a small amount of intelligence since they have to
know when to fire and when not to fire. But you don't argue that since
the neurons don't understand the ultimate result of their behaviour,
the brain as a whole doesn't understand it either.

> It has zero understanding for the same reason DWAP's understanding is zero: syntax does not give semantics.

Except that this is wrong: syntax does give semantics, once the
symbols are grounded.

> You fail to understand the distinction between strong and weak AI. Nobody disputes weak AI. Nobody disputes that computers will someday pass the Turing test. What is disputed is whether it will ever make sense to consider computers as possessing minds in the sense that humans have minds.

Everyone who disputes that computers can have minds also either
disputes weak AI or is self-contradictory. You've admitted that it's
absurd to say that you might be a partial zombie and not know it, and
yet weak AI without strong AI would make such an absurdity possible.
Do you deny that? You've avoided dealing with it, but you haven't
actually denied it: "I don't think that weak AI would allow the
creation of a partial zombie because..."

Note that this is *not* an argument about computers and minds per se,
but an argument about the possibility of weak AI. Weak AI without
strong AI presents a logical contradiction. Not even God could do it.

>> The symbols are meaningless because they are meaningless to
>> *Searle*, not because they would be meaningless to the Chinese
>> speaking system as a whole.
>
> Apparently you believe that if you embodied the system as Searle did, and if you did not understand the symbols as Searle didn't, the system would nevertheless have a conscious understanding of the symbols. But I don't think you can articulate how. You just want to state it as an article of faith.

It would happen by the same magical process that occurs in the brain.
When the system is complex enough to display humanlike behaviour,
humanlike consciousness results.

> Did you simply miss his counter-argument to the systems reply? *He becomes the system* and still does not understand the symbols. There exists no "vastly greater system" that understands them, unless you want to set foot into the religious realm.

Did you simply miss the counter-argument to the counter-argument? His
intelligence is simply a trivial component of the system. That the
neurons lack understanding or that the heart which is an essential
component pumping blood to the neurons lacks understanding does not
mean that the person, comprised of multiple components organised in a
system, lacks understanding. No-one has ever claimed that transistors
and copper wires understand the grand computations they participate
in.

>> In short, the systems reply simply begs the question by insisting
>> without argument that the system must understand Chinese.
>>
>> Looks to me like Searle is projecting a bit of begging the
>> question onto his critics.  Searle states as part of the
>> problem that the system behaves as though it understands Chinese as well
>> as a native speaker.  He then repeatedly assumes that the system
>> does not understand, and concludes that it does not understand.
>
> If you cannot explain how it has conscious understanding then you have no reply to Searle. We cannot assume understanding based only on external behavior.

That is all that we can ever do.

>> The systems critique can be stated without an assumption
>> of understanding:
>>
>> If there is understanding, then it can reside outside of
>> the human component.
>
> Again, you must have missed Searle's counter-reply. He internalizes the entire system and yet neither he nor anything inside him understands the symbols.

It doesn't magically elevate him from his position as a low-level part
of the system if he internalises everything.


-- 
Stathis Papaioannou


