[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Wed Dec 23 17:42:37 UTC 2009


On Dec 23, 2009, at 6:59 AM, Gordon Swobe wrote:
> 
> Searle wants to know what possesses some intelligent people to attribute "mind" to mere programs running on computers, programs which in the final analysis do nothing more interesting than kitchen can-openers.

Searle claims he is receptive to a rational explanation of mind, but the above shows that he is not, and neither are you. Any explanation, assuming it's any good, is going to reduce it to a "mere" something. That's just in the nature of explanations, it's what they do. It's like a man who is greatly impressed by a spectacular magic trick but, when he learns the mundane secret of its performance, is unsatisfied with the explanation because it was not mystical or supernatural.

> Can you show how syntax gives rise to semantics? 

I can, just as soon as you show me that syntax and semantics have absolutely nothing to do with each other, and that semantics can have only two values: understanding and non-understanding.

> Can you show how the man in the room who does nothing more than shuffle Chinese symbols according to syntactic rules can come to know the meanings of those symbols?

What would be the point of telling you that again? Others and I have explained it over and over, but you don't rebut what we say, you just repeat the same tired old question yet again.

> He's not the mystic. They are.

Searle is the one who doesn't believe in Evolution, not us.

 John K Clark