[ExI] Wernicke's aphasia and the CRA.

Stathis Papaioannou stathisp at gmail.com
Sat Dec 12 14:46:53 UTC 2009


2009/12/12 The Avantguardian <avantguardian2020 at yahoo.com>:

>> No, I mean that if you replace the brain a neuron at a time by
>> electronic analogues that function the same, i.e. same output for same
>> input so that the neurons yet to be replaced respond in the same way,
>> then the resulting brain will not only display the same behaviour but
>> will also have the same consciousness. Searle considers the neural
>> replacement scenario and declares that the brain will behave the same
>> outwardly but will have a different consciousness. The aforementioned
>> paper by Chalmers shows why this is impossible.
>
> I don't think we understand the functioning of neurons well enough to buy either Searle's or Chalmers's argument. Your neuron-by-neuron brain replacement assumes that neurons are functionally degenerate, i.e. that one neuron is equivalent to any other. By the logic of this thought experiment, if you were to replace your neurons one by one with Gordon's neurons, at the end you would still be you. But you could just as easily become Gordon, or at least Gordon-esque. At least that's what I take from the neuroscience experiment described in this Time article:
>
> http://www.time.com/time/magazine/article/0,9171,986057,00.html
>
> Of course, how much of Stathis, or Gordon for that matter, is a learned trait as opposed to a hardwired one is a matter for debate. But still, it gives you food for thought.

The replacement would have to involve artificial neurons that are
*functionally equivalent*. Quail neurons are apparently not
functionally equivalent replacements for chicken neurons, judging by
the evidence in the article you cited, and it wouldn't be surprising
if one human's neurons were not functionally equivalent to another's
either.
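
To be concrete about what "functionally equivalent" means in this
argument: same output for same input, so that swapping implementations
one at a time leaves the behaviour of the whole network unchanged. Here
is a toy Python sketch of that idea (all names and parameters are
illustrative; nothing here models real neurons):

    class BiologicalNeuron:
        """Stand-in for an original neuron: fires iff weighted input crosses a threshold."""
        def __init__(self, weights, threshold):
            self.weights = weights
            self.threshold = threshold

        def fire(self, inputs):
            activation = sum(w * x for w, x in zip(self.weights, inputs))
            return 1 if activation >= self.threshold else 0

    class ArtificialNeuron(BiologicalNeuron):
        """An admissible replacement: identical input-output mapping by construction."""
        pass

    def replace_one_at_a_time(network, make_replacement, probe_inputs):
        """Swap each neuron for its analogue, verifying equivalence at every step."""
        for i, old in enumerate(network):
            new = make_replacement(old)
            # functional equivalence check: same output for same input
            assert all(new.fire(x) == old.fire(x) for x in probe_inputs)
            network[i] = new
        return network

    network = [BiologicalNeuron([0.5, -0.2], 0.1), BiologicalNeuron([1.0, 1.0], 1.5)]
    probes = [(0, 0), (0, 1), (1, 0), (1, 1)]
    replace_one_at_a_time(network, lambda n: ArtificialNeuron(n.weights, n.threshold), probes)
    print([n.fire((1, 1)) for n in network])   # behaviour unchanged: [1, 1]

The quail/chicken case is then a replacement that fails the equivalence
check partway through, which is exactly why the thought experiment
stipulates equivalence rather than assuming it.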

I've never accepted simplistic notions of mind uploading that hold
that all the information needed is a map of the neural connections. To
properly model a brain you may need to go all the way down to the
molecular level, which would of course require extremely fine scanning
techniques and a fantastic amount of computing power. Nevertheless,
unless there is something fundamentally non-computable in the brain, a
computer model should be possible, and this is sufficient to make the
case for functionalism.
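
For a sense of what "a computer model of a neuron" means, the standard
leaky integrate-and-fire model is a deliberately coarse but well-defined
example; whether the brain in fact requires molecular-level detail
instead is the open question above. A minimal sketch (parameter values
are illustrative, not fitted to any real neuron):

    def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                     v_thresh=-50.0, v_reset=-70.0):
        """Return spike times (ms) for a leaky integrate-and-fire neuron."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            # membrane potential decays toward rest and integrates input
            dv = (-(v - v_rest) + i_in) / tau
            v += dv * dt
            if v >= v_thresh:       # threshold crossing: emit spike, reset
                spikes.append(step * dt)
                v = v_reset
        return spikes

    spike_times = simulate_lif([20.0] * 2000)  # constant drive for 200 ms
    print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")

Whatever the right level of description turns out to be, the point is
that it would still be a computation of this general kind, only vastly
more detailed.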



-- 
Stathis Papaioannou


