[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 21 06:51:09 UTC 2009


2009/12/21 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sun, 12/20/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> But it seems that you and Searle are saying that the CR
>> lacks understanding *because* the man lacks understanding of
>> Chinese, whereas the brain, with completely dumb components, has
>> understanding.
>
> The brain has understanding, yes, but Searle makes no claim about the dumbness or lack thereof of its components.  You added that to his argument.
>
> He starts with the self-evident axiom that brains have understanding and then asks if Software/Hardware systems can ever have it too. He concludes they cannot based on his logical argument, which I've posted here several times.
>
>> So you are penalising the CR because it has smart
>> components and because what it does has an algorithmic pattern.
>
> He penalizes the CR only because it runs a formal program, and nobody has shown how programs can have minds capable of understanding the symbols they manipulate. In other words, nobody has shown his formal argument false. If somebody has seen it proved false then point me to it.
>
> I see people here like Eugen who scoff but who offer no evidence that Searle's logic fails. Is it just an article of religious faith on ExI that programs have minds? And if it is, and if we cannot explain how it happens, then should we adopt the mystical philosophy that everything has mind merely to protect the notion that programs do or will?
>
>> By this reasoning, if neurons had their own separate rudimentary
>> intelligence and if someone could see a pattern in the brain's
>> functioning to which the term "algorithmic" could be applied, then
>> the brain would lack understanding also.
>
> No, Searle argues that even if we can describe brain processes algorithmically, those algorithms running on a S/H system would not result in understanding; that it's not enough merely to simulate a brain in software running on a computer.
>
> S/H systems are not hardware *enough*.

But an S/H system is a physical system, like a brain. You claim that
the computer lacks something the brain has: that it is only syntactic,
and syntax does not entail semantics. But even if it is true that
syntax does not entail semantics, how can you be sure that the brain
has the extra ingredient for semantics and the computer does not, and
how does the CR argument show this? You've admitted that it isn't
because the components of the CR have independent intelligence, and
you've admitted that it isn't because the operation of the CR has an
algorithmic description while that of the brain does not. What other
differences between brains and computers are there which are
illustrated by the CRA? (Don't say that the brain has understanding
while the computer or CR does not: that is the thing in dispute.)

Although the CRA does not show that computers can't be conscious, it
would still seem possible that there is some substrate-specific
special ingredient which a computer behaving like a brain lacks, as a
result of which the computer would be unconscious or at least
differently conscious. But Chalmers's "fading qualia" argument
constitutes a decisive refutation of such an idea. I cut and paste
from my previous post. Searle favours alternative (b); you suggested
that (a) would be the case, but then seemed to backtrack:

If you don't believe in a soul then you believe that at least some of
the neurons in your brain are actually involved in producing the
visual experience. It is these neurons I propose replacing with
artificial ones that interact normally with their neighbours but lack
the putative extra ingredient for consciousness. The aim of the
exercise is to show that this extra ingredient cannot exist, since
otherwise it would lead to one of two absurd situations: (a) you would
be blind but you would not notice you were blind; or (b) you would
notice you were blind but you would lose control of your body, which
would smile and say everything was fine.

Here is a list of the possible outcomes of this thought experiment:

(a) as above;
(b) as above;
(c) you would have normal visual experiences (implying there is no
special ingredient for consciousness);
(d) there is something about the behaviour of neurons which is not
computable, which means even weak AI is impossible and this thought
experiment is impossible.

I'm pretty sure that is an exhaustive list, and one of (a) - (d) has
to be the case.

I favour (c). I think (a) is absurd, since if nothing else, having an
experience means you are aware of having the experience. I think (b)
is very unlikely because it would imply that you are doing your
thinking with an immaterial soul, since all your neurons would be
constrained to behave normally.

I think (d) is possible but unlikely, and Searle agrees. Nothing in
physics has so far been shown to be uncomputable, and there is no
reason to think that uncomputable physics should be hiding inside
neurons.



-- 
Stathis Papaioannou


