[ExI] Consciousness and paracrap

Stathis Papaioannou stathisp at gmail.com
Fri Feb 19 10:51:35 UTC 2010


On 19 February 2010 11:15, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Wed, 2/17/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> The thought experiment involves replacing brain components
>> with artificial components that perfectly reproduce the I/O
>> behaviour of the original components, but not the consciousness.
>> Gordon agrees that this is possible. However, he then either claims that
>> the artificial components will not behave the same as the biological
>> components (even though it is an assumption of the experiment that they
>> will) or else says the experiment is ridiculous.
>
> You make what I consider an over-simplification when you assume here, as you do, that the i/o behavior of a brain/neuron is all there is to the brain. To you it just seems "obvious" but I consider it anything but obvious. On your view, artificial neurons stuffed with mashed potatoes and gravy would work just fine provided they had the right i/o's.

The experiment generalises to any scale: replace the cell nucleus, a
ribosome, or a cubic centimetre of brain tissue with a functionally
identical component. It would be hard to do this with mashed potato
and gravy, but if all the behaviour of the cell can be described
algorithmically, according to Church's thesis it can be modelled by a
digital computer. The computer would need sensors and effectors in
order to interact with neighbouring brain structures or the rest of
the environment.
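
To make the idea concrete, here is a rough Python sketch of what
"functionally identical" means here. The names (NeuronComponent,
respond) are invented for illustration, and this is not meant as a real
neural simulation, just the bare input/output view of a component:

from typing import Protocol, Sequence


class NeuronComponent(Protocol):
    """Anything that maps signals from neighbouring neurons to an output."""
    def respond(self, inputs: Sequence[float]) -> float:
        ...


class BiologicalNeuron:
    def respond(self, inputs: Sequence[float]) -> float:
        # Stand-in for whatever the real cell does internally.
        return 1.0 if sum(inputs) > 0.5 else 0.0


class ArtificialNeuron:
    """Different internals, same observable behaviour."""
    def respond(self, inputs: Sequence[float]) -> float:
        # Any algorithmic description of the cell's behaviour could go
        # here; only the input/output mapping matters to the neighbours.
        return 1.0 if sum(inputs) > 0.5 else 0.0

As far as the rest of the brain is concerned, the two implementations
are interchangeable, which is the only sense of "functionally
identical" the thought experiment needs.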

> It does not even seem to occur to you, for example, that consciousness may involve the electrical signals that travel down the axons internal to the neurons, or involve any number of a million other electrical or chemical processes *internal* to natural neurons.

The idea is to reproduce every detail of the neuron that affects its
behaviour as seen by another neuron. If you were to make a robot that
passes as human, it would not do just to make it look human and give it
a recording of a human voice: it would have to, for example, move like
a human and participate in a conversation like a human. There are two
ways in which this could be done. One way is to model the internal
workings of a human brain; the other way is to make a detailed model
from the observed external behaviour. It would not be easy to do this
for the whole brain or for any subset of the brain, but for the
purposes of the thought experiment this is not an issue.

Imagine that
we are being studied by extremely advanced aliens who have no idea
whether we are conscious or not. The aliens scan individual neurons in
the hapless human's brain and from this information make little robot
neurons which they use to replace the original neurons one by one.
There is only one design requirement for the artificial neurons: that
they behave just like the biological neurons from the point of view of
the remaining biological neurons with which they interact. It may be,
for example, that the tiny electric field created by an electrical
impulse travelling down an axon affects in some subtle way the
behaviour of neurons up to a millimetre away. The aliens would figure
this out and might decide to reproduce the effect by controlling the
current in a solenoid mounted in the centre of each artificial neuron.
The important point is that they do not put the solenoid there because
it might have something to do with consciousness, since they neither
know nor care about our consciousness. They put the solenoid there
because otherwise the artificial neuron would behave abnormally,
causing the remaining biological neurons to behave abnormally, causing
the human to behave abnormally.
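
In code terms, the aliens' only design requirement might be expressed as
a rough sketch like the following (again the names are invented for
illustration, and it reuses the respond interface from the sketch
above): swap each component only if the remaining neurons cannot tell
the difference.

import random


def indistinguishable(original, replacement, trials=1000, fan_in=8):
    """Compare outputs over many input patterns, from the neighbours' view."""
    for _ in range(trials):
        inputs = [random.random() for _ in range(fan_in)]
        if abs(original.respond(inputs) - replacement.respond(inputs)) > 1e-9:
            return False
    return True


def replace_one_by_one(brain, build_replacement):
    """Replace components one at a time, keeping behaviour unchanged."""
    for i, neuron in enumerate(brain):
        candidate = build_replacement(neuron)
        if not indistinguishable(neuron, candidate):
            raise ValueError(f"replacement for neuron {i} behaves abnormally")
        brain[i] = candidate
    return brain

Nothing in this test mentions consciousness; it checks only that the
replacement behaves normally from the point of view of the neurons it
interacts with.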

Now the question which I have asked several times is this: is it
possible for the aliens to make artificial brain components which
behave exactly the same as the biological components but lack
consciousness? Searle believes it is possible but I still don't know
what you believe. You say that it is possible, but then you claim (I
think) that the brain would start behaving abnormally if the
artificial components are installed, which could only happen if these
components behave abnormally. Could you please clarify your position?

> When pressed you say that your argument applies to the whole brain and not only to individual neurons, so let's take a look at that:
>
> Let us say that we created an artificial brain that contained a cubic foot of warm leftover mashed potatoes and gravy. Only the neurons on the exterior exist, but they have the i/o's of the exterior neurons of a natural brain, so the brain as a whole has the same i/o behavior as a natural brain.
>
> Would your mister potato-head have consciousness? After all, it has the same i/o's as a natural brain, and you think nothing else matters.

It would have to be very special mashed potato and gravy because it
would have to do enough processing to sustain normal intelligence, but
if it did have this property then yes, Mr. Potato Head would have
normal consciousness. This may seem counter-intuitive, which is why in
all my posts I have started by assuming that you are right and the
artificial components would have normal behaviour but not
consciousness. This assumption leads to the conclusion that it is
possible to selectively remove an important aspect of a person's
consciousness, and that not only would he behave as if nothing had
changed, he would also fail to notice that he had become a partial
zombie. You
yourself have said that this is absurd. I agree that it is absurd,
which is why I am led to the conclusion that it is *not* possible to
create such zombie brain components. Either the artificial components
won't work properly and the person's behaviour will change, or they
will work properly and the person will have normal consciousness.


-- 
Stathis Papaioannou


