[ExI] Wernicke's aphasia and the CRA.

Gordon Swobe gts_2000 at yahoo.com
Fri Dec 11 01:40:45 UTC 2009


-- On Wed, 12/9/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> And Searle says this beast called intentionality
>> cannot live inside S/H systems. That's what his Chinese Room
>> Argument is all about.
>
> And the counterargument is that of course the Chinese Room
> would have semantics and intentionality and all the other
> good things that the brain has.

If you can formulate a good counter-argument to support that counter-thesis, I hope you will post it here!

> Only if consciousness were a side-effect of intelligent behaviour
> would it have evolved.

I don't understand your meaning. Lots of non-adaptive traits have evolved as side-effects of adaptive traits. Do you count consciousness as such a non-adaptive trait, one that evolved alongside the adaptive trait of intelligence? Or do you mean to say that consciousness increases or aids intelligence, an adaptive trait?

In any case, Searle rejects epiphenomenalism -- the view that subjective mental events act only as "side-effects" and do not cause physical events. Searle thinks they do: if you consciously will to raise your arm and it rises, your conscious willing had something to do with the fact that it rose. (In this example the philosophical concept of intentionality corresponds with the ordinary meaning.)

>> Now then, IF we first come to understand those causal
>> powers of brains and IF we then find a way to duplicate
>> those powers in something other than brains, THEN we will
>> create strong AI. On that day, pigs will fly.
>
> IF we simulate the externally observable behaviour of
> brains THEN we will create strong AI.

Do you mean to say that if something behaves exactly as if it has human intelligence, it must have strong AI? If so, then we mean different things by strong AI.

-gts
