[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Jan 4 11:50:36 UTC 2010


2010/1/4 Gordon Swobe <gts_2000 at yahoo.com>:

> suggested abbreviations and conventions:
>
> m-neurons = material ("clockwork") artificial neurons
> p-neurons = programmatic artificial neurons

I'll add two more:

b-neurons = biological neurons
c-neurons = consciousness-capable neurons

You claim:
all b-neurons are c-neurons
some m-neurons are c-neurons
no p-neurons are c-neurons

> Sam = the patient with the m-neurons
> Cram = the patient with the p-neurons (CRA-man)
>
> (If Sam and Cram look familiar it's because I used these names in a similar thought experiment of my own design.)
>
>> Firstly, I understand that you have no philosophical
>> objection to the idea that the clockwork neurons *could* have
>> consciousness, but you don't think that they *must* have consciousness,
>> since you don't (to this point) believe as I do that behaving like normal
>> neurons is sufficient for this conclusion. Is that right?
>
> No, because I reject epiphenomenalism, I think Sam cannot pass the TT without genuine intentionality. If Sam's m-neurons fail to result in a passing TT score for Sam, then we have no choice but to take his m-neurons back to the store and demand a refund.

It seems to me you must accept some type of epiphenomenalism if you
say that Cram can pass the TT while having different experiences to
Sam. This also makes it impossible ever to study the NCC (the neural
correlates of consciousness) scientifically. This experiment would be
the ideal test for it: the
p-neurons function like c-neurons but without the NCC, yet Cram
behaves the same as Sam. There is therefore no way of knowing that you
have actually taken out the NCC.

>> Moreover, if consciousness is linked to substrate rather than function
>> then it is possible that the clockwork neurons are conscious but with
>> a different type of consciousness.
>
> If Sam passes the TT and reports normal subjective experiences from m-neurons then I will consider him cured. I have no concerns about "type" of consciousness.

As you agreed in a later post, only some m-neurons are c-neurons. It
could be that an internal change in an m-neuron turns it from a
c-neuron into a ~c-neuron. But it seems you are saying there is no
in-between state: it is either a c-neuron or a ~c-neuron. Moreover, you
seem to be saying that there is only one type of c-neuron that could
fill the shoes of the original b-neuron, although presumably there are
different m-neurons that could give rise to this c-neuron. Is that
right?

>> Secondly, suppose we agree that clockwork neurons can give
>> rise to consciousness. What would happen if they looked like
>> conventional clockwork at one level but at higher resolution we could
>> see that they were driven by digital circuits, like the digital mechanism
>> driving most modern clocks with analogue displays? That is, would
>> the low level computations going on in these neurons be enough to
>> change or eliminate their consciousness?
>
> Yes. In that case the salesperson deceived us. He sold us p-neurons in a box labeled m-neurons. And if we cannot detect the digital nature of these neurons from careful physical inspection and must instead conceive of some digital platonic realm that drives or causes material objects then you will have introduced into our experiment the quasi-religious philosophical idea of substance or property dualism.

Suppose the m-neuron (which is a c-neuron) contains a mechanism to
open and close sodium channels depending on the transmembrane
potential difference. Would changing from an analogue circuit to a
digital circuit for just this mechanism change the neuron from a
c-neuron to a ~c-neuron? If not, then we could go about systematically
replacing the analogue subsystems in the neuron until we have a pure
p-neuron. At some point, according to what you have been saying, the
neuron would suddenly switch from being a c-neuron to a ~c-neuron. Is
it plausible that changing, say, one op-amp out of billions would have
such a drastic effect? On the other hand, what could it mean if the
neuron's (and hence the person's) consciousness smoothly decreased in
proportion to its degree of computerisation?
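
To put the dilemma slightly more formally (this is just my own sketch,
not anything you have committed to): write f for the fraction of the
neuron's analogue subsystems that have been replaced and C(f) for
whatever measure of consciousness you prefer, with C_0 the normal
level. Your position seems to force one of two shapes on C:

    % Option 1: a sharp threshold -- consciousness vanishes once some
    % critical fraction f* of the subsystems has been computerised.
    % Option 2: fading qualia -- consciousness decreases smoothly with
    % the degree of computerisation (the linear form is just one
    % illustrative choice).
    \[
      C(f) =
      \begin{cases}
        C_0 & \text{if } f < f^{*} \\
        0   & \text{if } f \ge f^{*}
      \end{cases}
      \qquad \text{or} \qquad
      C(f) = (1 - f)\, C_0 .
    \]

The first option is the sudden switch when a single op-amp is swapped;
the second is the smooth fading, which seems even harder to accept,
since Cram's behaviour, including his reports about his own experience,
would not fade at all.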

>> Finally, the most important point. The patient with the computerised
>> neurons behaves normally and says he feels normal.
>
> Yes.
>
>> Moreover, he actually believes he feels normal and that he understands
>> everything said to him, since otherwise he would tell us something is
>> wrong.
>
> No, he does not "actually" believe anything. He merely reports that he feels normal and reports that he understands. His surgeon programmed all p-neurons such that he would pass the TT and report healthy intentionality, including but not limited to p-neurons in Wernicke's area.

This is why the experiment considers *partial* replacement. Even
before the operation Cram is not a zombie: despite not understanding
language he can see, hear, feel, recognise people and objects,
understand that he is sick in hospital with a stroke, and he certainly
knows that he is conscious. After the operation he has the same
feelings, but in addition he is pleased to find that he now
understands what people say to him, just as he remembers doing before
the stroke. That is, he behaves as if he understands what people say to
him and he honestly believes that he understands what people say to
him; whereas before the operation he behaves as if he lacks
understanding and he knows that he lacks understanding, since when
people speak to him it sounds like gibberish. So the post-op Cram is a
very strange creature: he can have a normal conversation, appearing to
understand everything said to him, honestly believing that he
understands everything said to him, while in fact he doesn't
understand a word.

On the above account, it is difficult to make any sense of the word
"understanding". Surely a person who believes he understands language
and behaves as if he understands language does in fact understand
language. If not, what more could you possibly require of him? You
seem to understand me and (though I can't know another person's
thoughts for sure) I take your word that you honestly believe you
understand me, but this is exactly what would happen if you had been
through Cram's operation as well; so it's possible that the ham
sandwich you had for lunch yesterday destroyed the NCC in your
language centre, and you just haven't noticed.

The only other possibility if p-neurons are ~c-neurons is that Cram
does in fact realise that he has no more understanding after the
surgery than he did before, but can't do anything about it. He
attempts to lash out and smash things in frustration but his body
won't obey him, and he observes himself making meaningless noises
which the treating team apparently understand to be some sort of
thank-you speech. I believe that this is what Searle has said would
happen, though it is some time since I came across the paper and I
can't now find it. It would mean that Cram would be doing his thinking
with something other than his brain, which is forced to behave as if
everything were fine.

So if p-neurons are ~c-neurons, this leads to either partial zombies or
extra-brain thought. There's no other way around it. Both
possibilities are pretty weird, but I would say that the partial
zombies offend logic while the extra-brain thought offends science. Do
you still claim that the idea of a computer having a mind is more
absurd than either of these two absurdities?


-- 
Stathis Papaioannou
