[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Tue Dec 22 13:53:46 UTC 2009


2009/12/22 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Mon, 12/21/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> you suggested that (a) would be the case, but then seemed to backtrack:
>
> I suggested (a) would be the case if we replaced all neurons with your programmatic neurons.
>
>> If you don't believe in a soul then you believe that at least some
>> of the neurons in your brain are actually involved in producing the
>> visual experience. It is these neurons I propose replacing with
>> artificial ones that interact normally with their neighbours but
>> lack the putative extra ingredient for consciousness. The aim of the
>> exercise is to show that this extra ingredient cannot exist, since
>> otherwise it would lead to one of two absurd situations: (a) you
>> would be blind but you would not notice you were blind; or (b) you
>> would notice you were blind but you would lose control of your body,
>> which would smile and say everything was fine.
>
> I suppose (b) makes sense for the partial replacement scenario you want me to consider. If it seems bizarre, well then so too does the thought experiment!

The experiment involves replacing biological neurons with artificial
neurons. It's certainly no more bizarre than the CR, which is probably
physically impossible, as a normal human could never do the information
processing fast enough or accurately enough to pass as a Chinese
speaker. Even today there is talk of developing neural prostheses for
people with brain lesions - look up "artificial hippocampus". I guess
the team behind that project has not so far had spectacular success,
or we would have heard about it, but extrapolate the technology a few decades
hence and it doesn't seem wildly implausible that the technical
problems will be overcome. The question will then be: will the
cyborgised brain have the same consciousness, feelings, semantics etc.
that a normal brain has?

If you believe that with partial brain replacement you would feel
different but behave normally, then you are proposing that it is
possible for you to think with something other than your brain. This
is because your remaining biological brain is constrained to go
through exactly the same sequence of neural firings after the
replacement as before. It's not *impossible* that your cognition is
dependent on an immaterial soul, but I don't think you want to go down
this line of argument; and even Descartes thought that the soul and
the brain were always perfectly synchronised.

> And how does it in any way speak to the issue at hand? As in the title of the thread, our concern here is the symbol grounding problem in strong AI, or more generally "understanding" in S/H systems. To target Searle's argument (as you want to and which I appreciate) we need to use your nano-neuron thought experiments to somehow undermine his position that programs do not have semantics.

You claim that an S/H brain analogue would lack understanding. This
thought experiment shows that it would have understanding. If you
think the visual cortex example is missing the point, then consider
replacement of the neurons in Wernicke's area. You would claim to feel
exactly the same, you would believe that you understood language the
same as before, and you would use language appropriately as far as
anyone else could tell. If someone asked you what you had for dinner
last night, you would feel that you understood what he was asking, you
would recall an image of last night's meal, perhaps also its taste and
aroma, and you would describe all this in clear and appropriate
English. And yet, you
would say (I think) that because the artificial neurons just follow an
algorithm, and syntax is not sufficient for meaning, you don't
*really* understand either the question or your answer; you just have
the delusional belief that you understand it. But if it is
possible to be deluded about such a thing, how do you know that you
aren't deluded right now?


-- 
Stathis Papaioannou


