[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Wed Dec 23 13:18:10 UTC 2009


2009/12/23 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Wed, 12/23/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> However, amazingly, complex enough symbol manipulation by neurons,
>> electronic circuits or even men in Chinese rooms gives rise to a system
>> that understands the symbols.
>
> Or perhaps nothing "amazing" happens. Instead of believing in magic, I find it easier to accept that the computationalist theory of mind is simply incoherent. It does not explain the facts.

So you find it not particularly amazing that, in some unknown way,
chemical reactions give rise to mind, while the same thing happening
in electronic circuits is obviously incredible?

>> I know this because I have a brain which at the basic level only
>> "knows" how to follow the laws of physics, but in so doing it gives
>> rise to a mind which has understanding.
>
> Nobody has suggested we need to violate any laws of physics to obtain understanding. The suggestion is that the brain must do something in addition to, or instead of, running formal programs. Searle's work brings us one step closer to understanding what's really going on in the brain.

A computer only runs a formal program in the mind of the programmer. A
computer undergoes internal movements according to the laws of
physics, which movements can (incidentally) be described
algorithmically. This is the most basic level of description. The
programmer comes along and gives a chunkier, higher level description
which he calls a program, and the end user, blind to either the
electronics or the program, describes the computer at a higher level
still. But how you describe it does not change what the computer
actually does or how it does it. The program is like a plan to help
the programmer figure out where to place the various parts of the
computer in relation to each other so that they will do a particular
job. Both the computer and the brain go clickety-clack, clickety-clack
and produce similar intelligent behaviour. The computer's parts were
deliberately arranged by the programmer in order to bring this result
about, whereas the brain's parts were arranged in a spontaneous and
somewhat haphazard way by nature, making it more difficult to see the
algorithmic pattern (although it must be there, at least at the level
of basic physics). In the final analysis, it is this difference
between them that convinces you the computer doesn't understand what
it's doing and the brain does.
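To make the levels-of-description point concrete, here is a toy sketch
(the functions and names are invented purely for illustration, not a
claim about how any real machine or brain is organised): the same job
described "chunkily" at a high level, and again as the step-by-step
movements underneath. Nothing about what actually happens changes when
we switch descriptions.

    # Toy illustration only: one process, two descriptions.

    def total_high_level(xs):
        # High-level, programmer's-eye description: "sum the list".
        return sum(xs)

    def total_step_by_step(xs):
        # Lower-level description of the same job: an accumulator
        # updated one element at a time, closer to the clickety-clack
        # the hardware actually steps through.
        acc = 0
        for x in xs:
            acc = acc + x
        return acc

    # Both descriptions pick out the same behaviour.
    assert total_high_level([1, 2, 3]) == total_step_by_step([1, 2, 3])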

>> At a more basic level, it seems clear to me that all
>> semantics must at bottom reduce to syntax.
>> A child learns to associate one set of inputs
>> - the sound or shape of the word "dog" - with another set
>> of inputs - a hairy four-legged beast that barks. Everything you
>> know is a variant on this theme, and it's all symbol manipulation.
>
> The child learns the meaning of the sound or shape "dog", whereas the program merely learns to associate the form of the word "dog" with an image of a dog. While their behaviors might match, the former has semantics accompanying its behavior; the latter does not (or if it does, then we need to explain how).

What is it to learn the meaning of the word "dog" if not to associate
its sound or shape with an image of a dog?
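On the view I am defending, "learning a meaning" just is building such
an association between one input stream and another. A deliberately
crude sketch of that claim (the class, the percept dictionaries and all
the names below are my own invention for the example, not a model of a
child or of any actual program):

    # Toy sketch: meaning as association between a word symbol and
    # the percepts it co-occurs with. Purely illustrative.

    from collections import defaultdict

    class Associator:
        """Associates word symbols with co-occurring perceptual inputs."""

        def __init__(self):
            # word -> list of percepts observed alongside that word
            self.memory = defaultdict(list)

        def observe(self, word, percept):
            """Record that `word` co-occurred with `percept`."""
            self.memory[word].append(percept)

        def meaning(self, word):
            """Return every percept associated with `word` so far."""
            return self.memory[word]

    # A child hears "dog" while seeing a hairy four-legged thing that barks.
    learner = Associator()
    learner.observe("dog", {"legs": 4, "hairy": True, "barks": True})
    learner.observe("dog", {"legs": 4, "hairy": True, "barks": True,
                            "tail_wagging": True})

    print(learner.meaning("dog"))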

Anyway, despite the above, and without any help from Searle, it might
still seem reasonable to entertain the possibility that there is
something substrate-specific about consciousness, and to fear that if
you agree to upload your brain, the result would be a mindless zombie.
That is where the partial brain replacement (e.g. of the visual cortex
or Wernicke's area) thought experiment comes into play, proving that if
you duplicate the behaviour of neurons, you must also duplicate the
consciousness/qualia/intentionality/understanding. If you disagree
that it proves this, please explain why you disagree, and what you
think would actually happen if such a replacement were made. Perhaps
you could also ask the people on the other discussion group you have
mentioned.


-- 
Stathis Papaioannou


