[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sat Jan 2 01:21:50 UTC 2010


2010/1/2 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Fri, 1/1/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>>> I'll put an experiment to you, and you tell me what
>>> the answer should be:
>>>
>>> "Please imagine that your brain exists as partly real
>>> and partly as an abstract formal description of its former
>>> reality, and then report your imagined subjective
>>> experience."
>>
>>> I hope you can appreciate how any reasonable person would
>>> consider that question incoherent and even ludicrous. I hope
>>> you can also see that from my point of view, you asked me
>>> that same question.
>
>
>> What does "partly as an abstract formal description of its
>> former reality" mean?
>
> It means that programs exist as formal descriptions of real or supposed objects or processes. They describe and simulate real objects and real processes but they do not equal them.
>
>> I asked you no such thing.
>
> You did, but apparently you didn't understand me well enough to realize it.

Right, I asked you the question from the point of view of a
concrete-thinking technician. This simpleton sets about building
artificial neurons from parts he buys at Radio Shack, without it even
occurring to him that the programs these parts run are formal
descriptions of real or supposed objects, which simulate but do not
equal those objects. When he is happy that his artificial neurons behave
just like the real thing, he has his friend the surgeon, also
technically competent but not philosophically inclined, install them
in the brain of a patient rendered aphasic after a stroke. We can add
a second part to the experiment in which the technician builds another
set of artificial neurons based on clockwork nanomachinery rather than
digital circuits and has them installed in a second patient, the idea
being that the clockwork neurons do not run formal programs.

You then get to talk to the patients. Will both patients be able to speak
equally well? If so, would it be right to say that one understands
what he is saying and the other doesn't? Will the patient with the
clockwork neurons report that he feels normal while the other one reports
that he feels weird? Surely you should be able to observe *something*. If
you coped with the Chinese Room thought experiment but claim the
one I have just described is incoherent or ridiculous, then you are
being intellectually dishonest.



-- 
Stathis Papaioannou
