[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sat Dec 19 23:47:13 UTC 2009


2009/12/20 Gordon Swobe <gts_2000 at yahoo.com>:

>> It's important that you consider first the case of
>> *partial* replacement, e.g. all of your visual cortex but the rest of
>> the brain left intact.
>
> I based all my replies, with each of which you disagreed, on a complete replacement, because the partial replacement just seems too speculative to me. (The complete replacement is extremely speculative as it is!)

It's a thought experiment, so you can do anything as long as no
physical laws are broken.

> I simply don't know (nor do you or Searle) what role the neurons in the visual cortex play in conscious awareness. Do they only play a functional role as I think you suppose, as mere conduits of visual information to consciousness, or do they also play a role in the conscious experience? I don't know and I don't think anyone does.

If you don't believe in a soul then you believe that at least some of
the neurons in your brain are actually involved in producing the
visual experience. It is these neurons I propose replacing with
artificial ones that interact normally with their neighbours but lack
the putative extra ingredient for consciousness. The aim of the
exercise is to show that this extra ingredient cannot exist, since
otherwise it would lead to one of two absurd situations: (a) you would
be blind but you would not notice you were blind; or (b) you would
notice you were blind but you would lose control of your body, which
would smile and say everything was fine.

Here is a list of the possible outcomes of this thought experiment:

(a) as above;
(b) as above;
(c) you would have normal visual experiences (implying there is no
special ingredient for consciousness);
(d) there is something about the behaviour of neurons which is not
computable, which means that even weak AI is impossible and the
thought experiment cannot be carried out.

I'm pretty sure that is an exhaustive list, and one of (a) - (d) has
to be the case.

I favour (c). I think (a) is absurd, since if nothing else, having an
experience means you are aware of having the experience. I think (b)
is very unlikely because it would imply that you are doing your
thinking with an immaterial soul, since all your neurons would be
constrained to behave normally.

I think (d) is possible, but unlikely, and Searle agrees. Nothing in
physics has so far been proved to be uncomputable, and there is no
reason to think that uncomputable processes should be hiding inside
neurons.


-- 
Stathis Papaioannou


