[ExI] The digital nature of brains (was: digital simulations)

Stathis Papaioannou stathisp at gmail.com
Sun Jan 31 00:33:51 UTC 2010


2010/1/31 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sat, 1/30/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> A neuron will also be able to follow the formal principles
>> without understanding anything, or at any rate understanding much
>> less than a human doing the same job.
>
> I don't disagree but it misses the point. In Searle's reply to his systems critics, he becomes the system and *neither he nor anything inside him* can understand the symbols. You reply "Yeah well neurons don't know anything either but the system does". Do you see how that misses the point? *We can no longer compare the man to a neuron in a larger system*. We cannot do so because the man becomes the entire system, and his neurons lack understanding just as he does.
>
> He no longer exists as part of a larger system that might understand the symbols, unless you want to set foot in the domain of religion and claim that some god understands the symbols that he cannot understand. Is that your claim?

He is the whole system, but his intelligence is only a small and
inessential part of it, since it could easily be replaced by dumber
components. It's irrelevant that the man doesn't really understand
what he is doing: the ensemble of neurons doesn't understand what
it's doing either, and they too are the whole system. Even if the
neurons were somehow linked into one organism that knew exactly how
and when to fire its constituent parts, the intelligence involved in
doing this would be a separate submind, its actions giving rise to
the more impressive human mind.

Another way to look at this is what I have called the extended CRA
(which is similar to Ned Block's Chinese Nation argument): instead of
one man, there are two or more men cooperating. This is now closer to
the behaviour of the brain. Would you say that this system can have
consciousness even though the single-man CR cannot?

>>> Briefly: We cannot first understand the meaning of a
>> symbol from looking only at its form. We must learn the
>> meaning in some other way, and attach that meaning to the
>> form, such that we can subsequently recognize that form and
>> know the meaning.
>>
>> Yes, symbol grounding, which occurs when you have sensory
>> input. That completely solves the logical problem of where symbols get
>> their meaning
>
> I created the 'robot reply to the CRA' thread to discuss this, but haven't pursued it, mainly because it makes no sense until you understand the basic CRA. Every serious rebuttal to the CRA -- all the serious rebuttals by serious philosophers of the subject, including those who advocate the robot reply -- starts with a recognition that, if nothing else, Searle makes a good point that:
>
> A3: syntax is neither constitutive of nor sufficient for semantics.
>
> It's because of A3 that the man in the room cannot understand the symbols. I started the robot thread to discuss the addition of sense data on the mistaken belief that you had finally recognized the truth of that axiom. Do you recognize it now?

No, I assert the very opposite: that meaning is nothing but the
association of one input with another. You posit a magical extra
step, which is completely useless and undetectable by any means.


-- 
Stathis Papaioannou


