[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Wed Dec 23 11:32:25 UTC 2009


--- On Wed, 12/23/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> However, amazingly, complex enough symbol manipulation by neurons, 
> electronic circuits or even men in Chinese rooms gives rise to a system 
> that understands the symbols.

Or perhaps nothing "amazing" happens. Instead of believing in magic, I find it easier to accept that the computationalist theory of mind is simply incoherent. It does not explain the facts.

> I know this because I have a brain which at the basic level only
> "knows" how to follow the laws of physics, but in so doing it gives 
> rise to a mind which has understanding. 

Nobody has suggested that we need to violate any laws of physics to obtain understanding. The suggestion is that the brain must do something in addition to, or instead of, running formal programs. Searle's work brings us one step closer to understanding what is really going on in the brain.

> At a more basic level, it seems clear to me that all
> semantics must at bottom reduce to syntax. 
> A child learns to associate one set of inputs
> - the sound or shape of the word "dog" - with another set
> of inputs - a hairy four-legged beast that barks. Everything you 
> know is a variant on this theme, and it's all symbol manipulation.

The child learns the meaning of the sound or shape "dog", whereas the program merely learns to associate the form of the word "dog" with an image of a dog. While their behaviors might match, the former has semantics accompanying its behavior; the latter does not (or if it does, then we need to explain how).
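To make concrete what I mean by "merely associating forms", here is a toy sketch (my own illustration, not anything Stathis proposed; all names and numbers are made up). The program pairs the token "dog" with feature vectors extracted from images and later retrieves them. Every step is a formal operation on symbols, and nothing in it obviously amounts to understanding what a dog is:

from collections import defaultdict

class Associator:
    """Stores pairings between word tokens and image feature vectors."""

    def __init__(self):
        self.memory = defaultdict(list)  # token -> list of feature vectors

    def observe(self, token, features):
        # "Learning" here is just recording that two symbol sets co-occurred.
        self.memory[token].append(features)

    def recall(self, token):
        # "Recognition" is just retrieval and averaging -- purely formal steps.
        vectors = self.memory.get(token, [])
        if not vectors:
            return None
        dim = len(vectors[0])
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

# Hypothetical usage: pair the word form with crude image-derived features.
a = Associator()
a.observe("dog", [0.9, 0.1, 0.4])   # features from one dog photo
a.observe("dog", [0.8, 0.2, 0.5])   # features from another
print(a.recall("dog"))              # the stored "prototype" for the token "dog"

Whether the child's learning is relevantly like this, or involves something more, is exactly the point in dispute.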


-gts

