[ExI] Wernicke's aphasia and the CRA.

Gordon Swobe gts_2000 at yahoo.com
Fri Dec 11 13:27:19 UTC 2009


--- On Fri, 12/11/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Perhaps you could do the work for me and prove that *you*
> have semantics and intentionality and aren't just a zombie
> computer program.

That tactic of the sceptic leads to a sort of solipsism, but I agree we could go down that rabbit hole if we really want to...

<here we sit now in the sceptic's rabbit hole>
Say there, Stathis. I notice that I have intentionality even down here in the sceptic's rabbit hole. I find it hard to hold in mind the idea that I don't have it because to hold anything whatsoever in mind is to have it. How about you? Do you have anything whatsoever in mind? :-)
<exiting the rabbit hole>

> Why don't we discuss whether intelligence is an epiphenomenon
> rather than consciousness? It's not my intelligence that makes me
> write this, it is motor impulses to my hands, intelligence being 
> a mere side-effect of this sort of neural activity with no causal 
> role of its own.

Well, from where I sit it sure seems that your hands write intelligent emails in the physical world and that something exhibiting intelligence must account for that fact. I don't mind if you choose not to call it your own intelligence. Call it whatever you please, but whatever you do choose to call it, I cannot consider it epiphenomenal. Epiphenomenal things cannot affect the physical world.


> No, I mean that if you replace the brain a neuron at a time
> by electronic analogues that function the same, i.e. same
> output for same input so that the neurons yet to be replaced 
> respond in the same way, then the resulting brain will not only 
> display the same behaviour but will also have the same consciousness. 

How will you know this?

> Searle considers the neural replacement scenario and declares that 
> the brain will behave the same outwardly but will have a different 
> consciousness. The aforementioned
> paper by Chalmers shows why this is impossible.

Chalmers is a functionalist (or at least he sometimes wears that hat), and yes, Searle disagrees with functionalism and its close relative, behaviorism.

In a nutshell, we might speculate and hope that a functional analogue of the brain will have consciousness, but until we understand why biological brains have it, we will never know if anything else has it. 

Without that knowledge of the brain, functionalism has some serious problems: some philosophers have shown, for example, that we could construct a functional analogue of the brain out of beer cans and toilet paper. It is pretty hard to imagine that contraption having anything like semantics, but in principle it acts no differently from the one Chalmers has in mind.

No matter how you construct that brain-like contraption, you won't find anything inside it to explain semantics/intentionality. On the inside it will look just like any other contraption. Actually, Leibniz first made this point with his mill argument hundreds of years ago.


-gts
