[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Thu Dec 24 09:18:03 UTC 2009


2009/12/24 Gordon Swobe <gts_2000 at yahoo.com>:

> Not exactly, but close. Brains contain something like electric circuits but I still find it incredible that a mind that runs only on programs can have everything biological minds have. Again, I find the computationalist theory of mind incredible.
>
>> A computer only runs a formal program in the mind of the
>> programmer.
>
> Where did you buy your computer? I built mine, and I can tell you it runs formal programs in RAM. :)

What is it about the makeup of your computer that marks it as
implementing formal programs? Because you built it you can see certain
patterns in it which represent the programs, but this is just you
superimposing an interpretation on it. It is no more a physical fact
about the computer than interpreting constellations as looking like
animals is a physical fact about stars.

You believe that programs can't give rise to minds, but the right kind
of physical activity can. Would you then object to the theory that it
isn't the program that gives rise to the computer's mind, but the
physical activity that takes place during the program's
implementation?

>> What is it to learn the meaning of the word "dog" if not to
>> associate its sound or shape with an image of a dog?
>
> Both you and the computer make that association, and both of you act accordingly. But only you know about it, i.e., only you know the meaning.

If I don't know the meaning of a symbol that is because I don't know
what object to associate it with. Once I make the association, I know
the meaning. I don't see how I could coherently claim that I can
correctly and consciously make the association but not know the
meaning.

> We can take your experiment deeper, and instead of creating a program driven nano-neuron to substitute for the natural neuron, we keep everything about the natural neuron and replace only the nucleus. This neuron will appear even more natural than yours. Now we take it another step and keep the nucleus. We create artificial program-driven DNA (whatever that might look like) to replace the DNA inside the nucleus. And so on. In the limit we will have manufactured natural program-less neurons.
>
> I don't know if Searle (or anyone) has considered the ramifications of this sort of progression that I describe in terms of Searle's philosophy, but it seems to me that on Searle's view the person's intentionality would become increasingly apparent to him as his brain became driven less by abstract formal programs and more by natural material processes.
>
> This also leaves open the possibility that your more basic nano-neurons, those you've already supposed, would not deprive the subject completely of intentionality. Perhaps your subject would become somewhat dim but not completely lose his grip on reality.

I think you've missed the main point of the thought experiment, which
is to consider the behaviour of the normal neurons in the brain. We
replace Wernicke's area with an artificial analogue that is as
unnatural, robotlike and (it is provisionally assumed) mindless as we
can possibly make it. The only requirement is that it masquerade as
normal for the benefit of the neural tissue with which it interfaces.
This subject should have no understanding of language, but not only
will he behave as if he has understanding, he will also believe that
he has understanding and all his thoughts (except those originating in
Wernicke's area) will be exactly the same as if he really did have
understanding. He will thus be able to read a sentence, comment on it,
have an appropriate emotional response to it, paint a picture or write
a poem about it, and everything else exactly the same as if he had
real understanding. These will not just be the behaviours of a
mindless zombie, but will be based on genuine subjective experiences. Is it
possible that the subject lacks such an important component of his
consciousness as language or, in my previous example, vision, but
doesn't even realise? If so, how do you know that you aren't aphasic
or blind without realising it now? And what advantage would having
true language or vision bring if it makes no objective or subjective
difference to the subject or those with whom he comes into contact?

The conclusion from the absurdity of the alternatives is that if it is
possible to duplicate the behaviour of neural tissue, then all the
subjective experiences associated with the neural tissue will also be
duplicated.


-- 
Stathis Papaioannou
