[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sat Dec 19 05:13:06 UTC 2009


--- On Fri, 12/18/09, Will Steinberg <asyluman at gmail.com> wrote:

> Right now we have a man who processes and produces
> syntactical Chinese inside a box.  Arguing that he is not
> conscious is like arguing that the language center of the
> brain is (correctly) not conscious.  

Nobody has ever argued that the man has no consciousness. You must have some other Chinese guy in mind. 

> To accurately and verifiably produce human results, the machine must 
> have a memory, i.e. a symbol grounding area.  

Symbol grounding involves comprehension of the meanings of words and symbols, not their mere storage in memory.

Simply stated, programs manipulate symbols, but they have no way to know what those symbols mean unless they somehow have or cause minds. And that's what the brouhaha is all about.
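
To make the point concrete, here is a minimal sketch (Python, purely illustrative and not from the original post) of the kind of rule-following Searle describes: the program pairs input symbol strings with output symbol strings by table lookup, and nothing in it represents what any of the symbols mean. The rulebook entries below are invented for this example.

    # Toy "Chinese Room": a rulebook as a lookup table. The mapping is
    # purely syntactic; the program recognizes symbol strings only by
    # their form and never has access to their meanings.
    # (Illustrative sketch; the rules and strings are made up.)
    RULEBOOK = {
        "你好吗": "我很好",            # stock reply, selected by shape alone
        "你叫什么名字": "我没有名字",
    }

    def chinese_room(input_symbols: str) -> str:
        """Return whatever output the rulebook pairs with the input.

        No semantics anywhere: just string matching and string emission.
        """
        return RULEBOOK.get(input_symbols, "对不起")  # default is another uninterpreted string

    if __name__ == "__main__":
        print(chinese_room("你好吗"))

Whether such lookup-and-emit behavior could ever amount to understanding is exactly the question in dispute; the sketch only shows what "manipulating symbols without knowing their meanings" looks like in program form.
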

> I don't see how people can talk for so long about a
> limited, flawed thought experiment

I happen to agree the experiment has a flaw, though not for the reason you think. Fortunately for Searle, his argument does not depend on the thought experiment. 
 

-gts




      


