[ExI] Some new angle about AI

Stathis Papaioannou stathisp at gmail.com
Sun Jan 3 03:08:47 UTC 2010


2010/1/3 Lee Corbin <lcorbin at rawbw.com>:

Good to hear from you again, Lee.

> For those of us who are functionalists (or, in my case, almost
> 100% functionalists), it seems almost inconceivable that the causal
> components of an entity's having an experience require anything
> beneath the neuron level. In fact, it's very likely that the
> simulation of whole neuron tracts or bundles suffices.

The only reason to simulate the internal processes of a neuron is that
you can't otherwise be sure what it's going to do. For example, the
neuron may have decided, in response to past events and because of the
type of neuron it is, that it is going to increase production of
dopamine receptors, decrease production of MAO and increase production
of COMT (both enzymes that break down dopamine and other
catecholamines). This is going to change the neuron's sensitivity to
dopamine in a complex way, and therefore the neuron's behaviour, and
therefore the whole brain's behaviour. In your model of the neuron you
need a "sensitivity to dopamine" function which takes as variables the
neuron's present state and all the inputs acting on it. If you can
figure out what this function is by treating the neuron as a black box
then, implicitly, you have modelled its internal processes even though
you might not know what dopamine receptors, COMT or MAO are. However,
it might be easier to get this function if you model the internal
processes explicitly.
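
To make that concrete, here is a minimal sketch in Python of the
black-box approach. Everything in it is an illustrative assumption on
my part (the linear feature form, the made-up data, the names), not a
real neuron model: the point is only that you can fit the
"sensitivity" function from observed state/input/response data
without ever representing receptors, MAO or COMT.

import numpy as np

def fit_sensitivity(states, inputs, responses):
    # Least-squares fit of response ~ f(state, input) over a simple
    # feature expansion; the neuron's internals stay a black box.
    X = np.column_stack([states, inputs, states * inputs,
                         np.ones_like(states)])
    coeffs, _, _, _ = np.linalg.lstsq(X, responses, rcond=None)
    return coeffs

def predict_response(coeffs, state, inp):
    # Predict the neuron's behaviour from its present state and
    # input, using only the fitted black-box function.
    x = np.array([state, inp, state * inp, 1.0])
    return float(x @ coeffs)

# Example: recover a made-up underlying rule from observations alone.
rng = np.random.default_rng(0)
states = rng.uniform(0, 1, 200)
inputs = rng.uniform(0, 1, 200)
responses = 0.5 * states + 2.0 * inputs - 1.5 * states * inputs
coeffs = fit_sensitivity(states, inputs, responses)
print(predict_response(coeffs, 0.3, 0.7))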

I could go further and say that it isn't necessary even to simulate
the behaviour of a neuron in order to simulate the brain. You could
use cubic millimetres of brain tissue as the basic unit, ignoring
natural biological boundaries such as cell membranes. If you can
predict the cube's outputs in response to inputs, you can predict the
behaviour of the whole brain. But for practical reasons, it would be
easier to do the modelling at least at the cellular level.
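
A toy sketch of the cube idea, in the same spirit (the 1-D chain of
cubes and the tanh transfer rule are arbitrary stand-ins assumed
purely for illustration, not anatomy): each cube is a black box with
some input-to-output map, however obtained, and the whole "brain" is
just cubes feeding each other.

import numpy as np

class TissueCube:
    # A cubic millimetre treated as a black box: any function from
    # its inputs to its outputs will do, however it was obtained
    # (measurement, curve fitting, or a finer-grained simulation).
    def __init__(self, transfer):
        self.transfer = transfer

    def step(self, inputs):
        return self.transfer(inputs)

def step_brain(cubes, outputs):
    # One tick: each cube's input is the mean output of its two
    # neighbours on a 1-D chain, a crude stand-in for real
    # anatomical connectivity, ignoring cell membranes entirely.
    new = np.empty_like(outputs)
    for i, cube in enumerate(cubes):
        left = outputs[i - 1] if i > 0 else 0.0
        right = outputs[i + 1] if i < len(cubes) - 1 else 0.0
        new[i] = cube.step((left + right) / 2.0)
    return new

# Example: 10 identical cubes with an assumed tanh transfer function.
cubes = [TissueCube(np.tanh) for _ in range(10)]
outputs = np.zeros(10)
outputs[0] = 1.0  # perturb one cube and watch the effect propagate
for _ in range(5):
    outputs = step_brain(cubes, outputs)
print(outputs)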

> But I have no way of going forward to address Gordon's
> question. Logically, we have no way of knowing that in
> order to emulate experience, you have to simulate every
> single gluon, muon, quark, and electron. However, we
> can *never* in principle (so far as I can see) begin to
> answer that question, because ultimately, all we'll
> finally have to go on is behavior (with only a slight
> glance at the insides).

I think the argument from partial brain replacement that I have put
forward to Gordon shows that if you can reproduce the behaviour of
the brain, then you necessarily also reproduce the consciousness: if
the replacement left behaviour intact but changed the experience, the
subject would be unable to notice or report that anything had
changed, which is absurd. Simulating neurons and molecules is just a
means to this end.


-- 
Stathis Papaioannou
