[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Wed Dec 23 05:35:12 UTC 2009


2009/12/23 Gordon Swobe <gts_2000 at yahoo.com>:

> Well P3 is certainly open to debate. Can you show how syntax gives rise to semantics? Can you show how the man in the room who does nothing more than shuffle Chinese symbols according to syntactic rules can come to know the meanings of those symbols? If so then you've cooked Searle's goose.

The man doesn't know the meaning of the symbols; all he knows is how
to manipulate them. Neither do the neurons know the meaning of the
symbols they manipulate. However, amazingly, sufficiently complex
symbol manipulation by neurons, electronic circuits or even men in
Chinese rooms gives rise to a system that understands the symbols. I
know this because I have a brain which at the basic level only
"knows" how to follow the laws of physics, but in so doing it gives
rise to a mind which has understanding. At first glance it seems that
this might be due to some property of the substrate, but the neural
replacement experiment I keep going on about shows that duplicating
brain behaviour with a completely different substrate also duplicates
the understanding, which implies that it is the function rather than
the substance of the brain that is important.
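To make the functional-equivalence point concrete, here is a toy
Python sketch, purely an analogy of my own (the functions and names
are invented for illustration, not anything about brains): two
implementations of the same input-output mapping, built on completely
different internals, are indistinguishable from the outside.

    # Toy analogy: two "substrates" computing the same function.

    def square_by_table(n):
        # "Substrate" 1: a precomputed lookup table.
        table = {i: i * i for i in range(100)}
        return table[n]

    def square_by_arithmetic(n):
        # "Substrate" 2: direct multiplication.
        return n * n

    # Identical behaviour despite entirely different internals.
    assert all(square_by_table(n) == square_by_arithmetic(n)
               for n in range(100))

An outside observer who can only feed in inputs and read off outputs
has no way to tell which internals are doing the work.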

At a more basic level, it seems clear to me that all semantics must at
bottom reduce to syntax. A child learns to associate one set of inputs
- the sound or shape of the word "dog" - with another set of inputs -
the sight of a hairy four-legged beast that barks. Everything you know
is a variant on this theme, and it's all symbol manipulation.
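A toy Python sketch of that association (the encodings and names are
made up for illustration, not any real model of learning): "grounding"
the symbol "dog" here amounts to nothing more than storing a link
between two bundles of input tokens, which is pure symbol shuffling.

    # Hypothetical encodings of two input streams: the heard word and
    # the perceived animal, each reduced to arbitrary feature tokens.
    heard_word = ("d", "o", "g")
    perceived_features = frozenset({"hairy", "four-legged", "barks"})

    # The "semantics" is just a stored association between the two.
    associations = {heard_word: perceived_features}

    def meaning_of(word_tokens):
        # Return the feature bundle associated with a word, if any.
        return associations.get(tuple(word_tokens), frozenset())

    print(meaning_of("dog"))
    # frozenset({'hairy', 'four-legged', 'barks'})

Nothing in the lookup "knows" anything about dogs, yet the system as a
whole behaves as if the word has been tied to the thing.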


-- 
Stathis Papaioannou


