[ExI] The symbol grounding problem in strong AI

Ben Zaiboc bbenzai at yahoo.com
Thu Dec 17 13:15:55 UTC 2009


> Gordon Swobe <gts_2000 at yahoo.com> declared:

> It looks like you want to refute Searle's claim that
> although a computer simulation of a brain is possible, such
> a simulation will not have intentionality/semantics. It
> won't, on Searle's view, have any more semantics than a
> computer simulation of anything else has. A simulation is,
> umm, a simulation.
> 
> I once wrote a gaming application in C++ that contained an
> imaginary character. Because the character interacted in
> complex ways with the human player in spoken language (it
> used voice recognition), I found it handy to create an object
> called "brain" in my code to represent the character's
> thought processes. Had I had the knowledge and the time, I
> could have created a complete computer simulation of a real
> brain.
> 
> Assume I had done so. Did my character have understanding
> of the words it manipulated? Did the program itself have
> such understanding? In other words, did either the character
> or the program overcome the symbol grounding problem?
> 
> No and No and No. I merely created a computer simulation in
> which an imaginary character with an imaginary brain
> pretended to overcome the symbol grounding problem. I did
> nothing more interesting than a cartoonist who draws
> cartoons for your local newspaper.

Why do you say No, No, and No?
It's "Yes", "Yes", and "What symbol grounding problem?"

If your character had a brain, and it was a complete simulation of a biological brain, then how could it not have understanding? How could it fail to have every single functional property of a biological brain?
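
To make that concrete in your own medium, here's a toy C++ sketch (every name in it is invented, and the stub reply merely stands in for the full simulation you describe):

#include <iostream>
#include <memory>
#include <string>

// The only access anyone has to a brain, biological or simulated,
// is through its functional interface: symbols in, symbols out.
struct Brain {
    virtual ~Brain() = default;
    virtual std::string respond(const std::string& utterance) = 0;
};

// Stand-in for a complete simulation of a biological brain.
// (Purely hypothetical: the canned reply does duty for the real thing.)
struct SimulatedBrain : Brain {
    std::string respond(const std::string& utterance) override {
        return "a reply to: " + utterance;
    }
};

int main() {
    std::unique_ptr<Brain> brain = std::make_unique<SimulatedBrain>();
    // If the simulation reproduces every functional property of a
    // biological brain, nothing on this side of the interface can
    // tell the two apart.
    std::cout << brain->respond("Do you understand me?") << "\n";
}

The stub itself proves nothing, of course; the point is that whatever sits behind respond() shows the outside world only its functional behaviour, and your own premise is that the simulation's functional behaviour matches the biological brain's exactly.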

This: "A simulation is, umm, a simulation." is the giveaway, I think.  

Correct me if I'm wrong, but it seems that you think there is some magical functional property of a physical object that a model of it, *no matter how detailed*, cannot possess?

There's a name for this kind of thinking.

Ben Zaiboc
