[ExI] Is the brain a digital computer?
stathisp at gmail.com
Sat Feb 27 03:53:22 UTC 2010
On 27 February 2010 01:39, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Fri, 2/26/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>> You have no problem with the idea that an AI could behave
>> like a human but you don't think it could behave like a neuron.
> The Turing test defines (weak) AI and neurons cannot take the Turing test,
> so I don't know what it means to speak of an AI behaving like a neuron.
I'm mystified by this response of yours. Yes, the TT involves a
machine talking to humans and trying to convince them that it has a
mind. But surely you can see that this test was proposed because
language is taken to be one of the most difficult things for a machine
to pull off? It is usually taken as given that if a philosophical
zombie can trick a human with its lively conversation it won't then go
and give itself away with its blank stare and inability to walk
without arms and legs fully extended. The controversial question is
whether it is possible for an AI which is not conscious to behave as
if it is conscious. If you agree that an AI can do this, then you
should agree that it can copy all of the behaviour of a conscious
entity, both that which is dependent on consciousness and that which
is not. Thus the AI should be able to behave like a human, a flatworm,
an amoeba or a neuron.
>> The task is to replace all the components of a neuron with
>> artificial components so that the neuron behaves just the same.
> If and when we understand how neurons cause consciousness, we will perhaps
> have it in our power to make the kind of artificial neurons you want.
> They'll work a lot like biological neurons, and might work exactly like
> them. We might need effectively to get into the business of manufacturing
> biological neurons, rendering the distinction between artificial and
> natural meaningless.
We don't necessarily need to understand anything about consciousness
or cognition in order to do this. The extreme example is to copy a
neuron atom for atom: it will function exactly the same as the
original, including consciousness, even if the alien engineers are
convinced that human brains are too primitive to be conscious.
>> Are you saying that however hard the aliens try, they
>> won't be able to get the modified neuron to control
>> neurotransmitter release in the same way as the original neuron?
> No, I mean that where consciousness is concerned, I don't believe digital
> computations of its causal mechanisms will do the trick. To
> understand me here, you need to understand what I wrote a few days ago
> about the acausal and observer-relative nature of computations.
So you *are* saying that the aliens will fail to copy the behaviour of
a neuron if they use computational mechanisms. They may be able to get
neurotransmitter release right - that was just an example - but there
will be some other function the NCC performs that affects the neuron's
behaviour, which they just won't be able to reproduce, no matter how
advanced their
computers. The modified neuron will, on close examination, deviate
from the behaviour of the original neuron, and if installed in the
brain the brain's behaviour and hence the person's behaviour will also
be different. The aliens will conclude, while still suspecting nothing
about human consciousness, that the neuron is not Turing emulable, and
they will have to use a hypercomputer if they want to copy its behaviour.