[ExI] Wernicke's aphasia and the CRA.

Stathis Papaioannou stathisp at gmail.com
Sat Dec 12 22:41:12 UTC 2009


2009/12/13 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sat, 12/12/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> When I say that the artificial neurons are "functionally
>> equivalent" I am referring to their externally observable behaviour.
>> Functionalism is the theory that the mind follows whenever the
>> externally observable behaviour is reproduced, and that is what is at
>> issue here.
>
> More accurately, functionalism is the theory that if one constructed a brain-like contraption the components of which carried out the same functions as a real brain, mind would follow, no matter how one implemented those functions. Correct me if I have it wrong, but I believe functionalism so defined describes your position, though behaviorism certainly plays a role in it.

Yes, that's right. It's the behaviour of the neurons that is
important. It is possible that someone with a completely different and
differently-functioning brain from mine could be a very good actor and
copy my behaviour, but he probably wouldn't experience what I
experience. If, however, my behaviour were copied by making a machine
that copies my brain function, perhaps in a different substrate, then
my mind would also be copied.
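
A minimal sketch of what "functionally equivalent" means here (the
Neuron protocol, the toy threshold rule and the class names are all
illustrative assumptions, not a model of real neurons), in Python:

from typing import Protocol

class Neuron(Protocol):
    """Only the externally observable input-output mapping matters."""
    def fire(self, inputs: list[float]) -> float: ...

class BiologicalNeuron:
    def fire(self, inputs: list[float]) -> float:
        # Electrochemical detail omitted; this stands in for whatever
        # a real neuron does between receiving and emitting signals.
        return 1.0 if sum(inputs) > 0.5 else 0.0

class SiliconNeuron:
    def fire(self, inputs: list[float]) -> float:
        # Different substrate, identical observable behaviour.
        return 1.0 if sum(inputs) > 0.5 else 0.0

Any downstream neuron receiving these outputs cannot distinguish the
two implementations; that is the whole content of "functional
equivalence" as used above.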

> It seems you would not care how we constructed those neurons, provided they squirted the same neurotransmitters and emitted the same electrical signals between themselves, i.e., that they performed the same functions as real biological neurons. Yes?
>
> We could on that view construct a contraption the size of Texas with gigantic neurons constructed of, say, band-aids, Elmer's glue, beer cans and toilet paper. Provided those neurons squirted the same chemicals and signals betwixt themselves as in a real brain, would you consider the contraption conscious? And if so, why? How would those particular neurotransmitters and signals cause consciousness? And if you take it only on pure faith that they would do so, and offer no scientific explanation, then on what grounds can you justify your claim to have created a blueprint for strong AI?

If you consider my question below, you will see that the claim is
justified with the strength of logical necessity.

>> Searle, on the other hand, claims that weak AI is possible but strong
>> AI impossible, which is inconsistent. The neural replacement experiment
>> I described shows why this is so, and you haven't addressed it.
>
> I think I have addressed it, actually, but perhaps I misunderstood you. I've asked for clarification above.

You haven't explained what you think would happen if part of your
brain, say your visual cortex, were replaced with artificial neurons
which interacted with the remaining biological neurons in the same way
as the originals would have, while themselves lacking the ingredients
for consciousness.
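
To make the experiment concrete, here is a toy sketch reusing the
illustrative classes above (the region size, the stimulus and the
equality check are assumptions of the toy model, not neuroscience):

def run_region(neurons, stimulus):
    # Collect the output spikes that the rest of the brain would
    # receive from this region for a given stimulus.
    return [n.fire(stimulus) for n in neurons]

visual_cortex = [BiologicalNeuron() for _ in range(5)]
replacement = [SiliconNeuron() for _ in range(5)]

stimulus = [0.2, 0.4, 0.1]
assert run_region(visual_cortex, stimulus) == run_region(replacement, stimulus)

Since the remaining biological neurons receive exactly the same signals
in both cases, the subject's behaviour, including his reports of what
he sees, cannot change; the question is what he would experience.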


-- 
Stathis Papaioannou


