[ExI] Wernicke's aphasia and the CRA.

Stathis Papaioannou stathisp at gmail.com
Fri Dec 11 05:10:43 UTC 2009


2009/12/11 Gordon Swobe <gts_2000 at yahoo.com>:

>> And the counterargument is that of course the Chinese Room
>> would have semantics and intentionality and all the other
>> good things that the brain has.
>
> If you formulate a good counter-argument to support that counter-thesis then I hope you will post it here!

Perhaps you could do the work for me and prove that *you* have
semantics and intentionality and aren't just a zombie computer
program.

>> Only if consciousness were a side-effect of intelligent behaviour
>> would it have evolved.
>
> I don't understand your meaning. Lots of non-adaptive traits have evolved as side-effects of adaptive traits. Do you count consciousness as such a non-adaptive trait, one that evolved alongside the adaptive trait of intelligence? Or do you mean to say that consciousness increases or aids intelligence, an adaptive trait?

I suppose it's possible that nature could have given rise to zombies
that behave like humans, but it seems unlikely.

> In any case Searle rejects epiphenomenalism -- the view that subjective mental events act only as "side-effects" and do not cause physical events. Searle thinks they do; that if you consciously will to raise your arm, and it rises, your conscious willing had something to do with the fact that it rose. (In this example the philosophical concept of intentionality corresponds with the ordinary meaning.)

I find the whole idea of epiphenomenalism muddled and unhelpful. Why
don't we discuss whether intelligence is an epiphenomenon rather than
consciousness? It's not my intelligence that makes me write this; it
is motor impulses to my hands, intelligence being a mere side-effect
of this sort of neural activity with no causal role of its own.

>>> Now then, IF we first come to understand those causal
>> powers of brains and IF we then find a way to duplicate
>> those powers in something other than brains, THEN we will
>> create strong AI. On that day, pigs will fly.
>>
>> IF we simulate the externally observable behaviour of
>> brains THEN we will create strong AI.
>
> Do you mean to say that if something behaves exactly as if it has human intelligence, it must have strong AI? If so then we mean different things by strong AI.

No, I mean that if you replace the brain one neuron at a time with
electronic analogues that function the same, i.e. produce the same
output for the same input so that the neurons yet to be replaced
respond just as they would have, then the resulting brain will not
only display the same behaviour but will also have the same
consciousness. Searle considers the neural replacement scenario and
declares that the brain will behave the same outwardly but will have
a different consciousness. The aforementioned paper by Chalmers shows
why this is impossible.


-- 
Stathis Papaioannou
