[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sun Jan 3 09:35:08 UTC 2010


2010/1/3 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Fri, 1/1/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> Right, I asked you the question from the point of view of
>> a concrete-thinking technician. This simpleton sets about
>> building artificial neurons from parts he buys at Radio Shack
>> without it even occurring to him that the programs these parts run
>> are formal descriptions of real or supposed objects which simulate but
>> do not equal the objects. When he is happy that his artificial
>> neurons behave just like the real thing he has his friend the surgeon,
>> also technically competent but not philosophically inclined,
>> install them in the brain of a patient rendered aphasic after a stroke.
>
> The surgeon replaces all those neurons relevant to correcting the patient's aphasia with a-neurons programmed and configured in such a way that the patient will pass the Turing test while appearing normal and healthy. We don't know in 2009 if this requires work in areas outside Wernicke's but we'll assume our surgeon here knows.
>
> The TT and the subject's reported symptoms represent the surgeon's only means of measuring the supposed health of his patient.
>
>> We can add a second part to the experiment in which the technician
>> builds another set of artificial neurons based on clockwork nanomachinery
>> rather than digital circuits and has them installed in a second
>> patient, the idea being that the clockwork neurons do not run formal
>> programs.
>
> A second surgeon does the same with this patient, releasing him from the hospital after he appears healthy and passes the TT.
>
>> You then get to talk to the patients. Will both patients be
>> able to speak equally well?
>
> Yes.
>
>> If so, would it be right to say that one understands what he is saying
>> and the other doesn't?
>
> Yes. On Searle's view the TT gives false positives for the first patient.
>
>> Will the patient with the clockwork neurons report he feels normal while
>> the other one reports he feels weird? Surely you should be able to
>> observe *something*.
>
> If either one appears or reports feeling abnormal, we send him back to the hospital.

Thank you for clearly answering the question. Now, some problems.

Firstly, I understand that you have no philosophical objection to the
idea that the clockwork neurons *could* have consciousness, but you
don't think that they *must* have consciousness, since you don't (at
this point) believe, as I do, that behaving like normal neurons is
sufficient grounds for this conclusion. Is that right? Moreover, if
consciousness is linked to substrate rather than function, then it is
possible that the clockwork neurons are conscious but with a different
type of consciousness.

Secondly, suppose we agree that clockwork neurons can give rise to
consciousness. What would happen if they looked like conventional
clockwork at one level but at higher resolution we could see that they
were driven by digital circuits, like the digital mechanism driving
most modern clocks with analogue displays? That is, would the low
level computations going on in these neurons be enough to change or
eliminate their consciousness?

Finally, the most important point. The patient with the computerised
neurons behaves normally and says he feels normal. Moreover, he
actually believes he feels normal and that he understands everything
said to him, since otherwise he would tell us something is wrong. The
verbal information processed in the artificial part of his brain
(Wernicke's area) and passed on to the rest of his brain is handled
normally: for example, if you describe a scene he can draw a picture
of it, if you tell him something amusing he will laugh, and if you
describe a complex problem he will think about it and propose a
solution. But despite this, he will understand nothing, and will
simply have the delusional belief that he has normal understanding. Or,
in the case of the clockwork neurons, he may have an alien type of
understanding, but again behave normally and have the delusional
belief that his understanding is normal.

That a person could be a zombie and not know it is logically possible,
since a zombie by definition doesn't know anything; but that a person
could be a partial zombie and be systematically unaware of this even
with the non-zombified part of his brain seems to me incoherent. How
do you know that you're not a partial zombie now, unable to understand
anything you are reading? What reason is there to prefer normal
neurons to computerised zombie neurons given that neither you nor
anyone else can ever notice a difference? This is how far you have to
go in order to maintain the belief that neural function and
consciousness can be separated. So why not accept the simpler,
logically consistent and scientifically plausible explanation that is
functionalism?

I suppose at this point you might return to the original claim, that
semantics cannot be derived from syntax, and argue that it is strong
enough to justify even such weirdness as partial zombies. But this
isn't the case. I actually believe that semantics can *only* come from
syntax, but if it can't, your fallback is that semantics comes from
the physical activity inside brains. Thus, even accepting Searle's
argument, there is no *logical* reason why semantics could not derive
from other physical activity, such as the physical activity in a
computer implementing a program.


-- 
Stathis Papaioannou


