[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Thu Dec 17 15:38:41 UTC 2009


--- On Thu, 12/17/09, Ben Zaiboc <bbenzai at yahoo.com> wrote:

>> Assume I had done so. Did my character have
>> understanding of the words it manipulated? Did the program 
>> itself have such understanding? In other words, did either the
>> character or the program overcome the symbol grounding problem?
>> 
>> No and No and No. I merely created a computer simulation in
>> which an imaginary character with an imaginary brain
>> pretended to overcome the symbol grounding problem. I
>> did nothing more interesting than does a cartoonist who
>> writes cartoons for your local newspaper.

> It's "Yes", "Yes", and "What symbol grounding problem?"

You'll understand the symbol grounding problem if and when you understand my last sentence above: that I did nothing more interesting than a cartoonist does.
 
> If your character had a brain, and it was a complete
> simulation of a biological brain, then how could it not have
> understanding? 

Because it's just a program and programs don't have semantics.

> This: "A simulation is, umm, a simulation." is the
> giveaway, I think.  

The point is that computer simulations are just that: simulations. 

> Correct me if I'm wrong, but it seems that you think there
> is some magical functional property of a physical object
> that a model of it, *no matter how detailed*, cannot
> possess?

I don't claim that physical objects do anything magical. I do, however, claim that computer simulations of physical objects do not.

-gts
