[ExI] The digital nature of brains (was: digital simulations)

Stathis Papaioannou stathisp at gmail.com
Mon Feb 1 14:28:04 UTC 2010


2010/2/2 Gordon Swobe <gts_2000 at yahoo.com>:

>> P: It is possible to make artificial neurons which behave
>> like normal neurons in every way, but lack consciousness.
>
> P = true if we define behavior as you've chosen to define it: the release of certain neurotransmitters into the synapses at certain times, and other similar exchanges between neurons.

Yes, that would be one aspect of the behaviour that needs to be reproduced.

> I reject as absurd, for example, your theory that a brain the size of Texas, constructed of giant neurons made of beer cans and toilet paper, will have consciousness merely by virtue of those beer cans squirting neurotransmitters betwixt themselves in the same patterns that natural neurons do.

That is a consequence of functionalism, but for the purposes of this
argument functionalism is assumed to be wrong. All we need are
artificial neurons that fit inside the head (which excludes structures
the size of Texas) and can fool their neighbours into thinking they
are normal neurons.

> I also reject, in the first place, your implied assumption that the neuron is necessarily the atomic unit of the brain.

OK, P can be made even more general by replacing "neuron" with
"component". The component could be subneuronal in size or a
collection of multiple neurons. It just has to behave normally in
relation to its neighbours.

>> OK, assuming P is true, what happens to a person's
>> behaviour and to his experiences if the neurons in a part of his
>> brain with an important role in consciousness are replaced with these
>> artificial neurons?
>
> As I explained many times, because your artificial neurons will not help the patient have complete subjective experience,

Yes, that's an essential part of P: no subjective experiences.

> and because experience affects behavior in healthy people, the surgeon will need to keep reprogramming the artificial neurons, and most likely replacing and reprogramming other neurons, until at long last he creates a patient that passes the Turing test. But that patient will not have any better quality of consciousness than he started with, and may become far worse off subjectively by the time the surgeon finishes, depending on facts about neuroscience that in 2010 nobody knows.

But how? We agreed that the artificial components BEHAVE NORMALLY.
That is their essential feature, apart from lacking consciousness.
Remove any normal component whatsoever, drop in the replacement, and
the behaviour of the whole brain MUST remain unchanged, or else the
replacement component is not behaving as assumed. I can't believe
that you don't see this; after inconsistency, disingenuousness is the
worst sin you can commit in a philosophical discussion.

> Eric offered a more straightforward experiment in which he simulated the entire brain. You complicate the matter by doing partial replacements, but the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens.

You keep repeating it, but repetition doesn't make it so. I have
assumed that what you are saying is true and tried to show you that
it leads to an absurdity, but you respond by saying that if A behaves
exactly the same as B, then A does not behave exactly the same as B,
and you carry on as if no one will notice the problem with this!


-- 
Stathis Papaioannou


