[ExI] The symbol grounding problem in strong AI.

John Clark jonkc at bellsouth.net
Fri Dec 25 16:00:39 UTC 2009


On Dec 24, 2009, at 9:15 PM, Gordon Swobe wrote:

> I have no concerns about how "robotlike" you might make your artificial neurons. I don't assume that natural neurons do not also behave robotically. I do however assume that natural neurons do not run formal programs like those running now on your computer.

Then I have no idea what you mean by "robotically" and would be willing to bet money that you don't either.

> If they do then I must wonder who wrote them.

Well naturally you'd wonder who wrote those programs, because like Searle you either pretend not to know, or perhaps genuinely don't know, something any good high school biology student knows: a book explaining exactly how those programs came to be was written 150 years ago.

Searle sits in his armchair, a man who has never once dirtied his hands performing an actual experiment, and concludes that X cannot be true despite a huge amount of evidence gathered over the centuries indicating that it MUST be true. He says, "I personally don't understand how X could be true, and the only possible explanation for my lack of understanding is that X is in fact not true, and to hell with that titanic pile of empirical confirmation. I am smarter than the evidence; if I can't find the answer then I know the answer does not exist."

As I said before, this is the sort of thing that gives philosophy a bad name, and it is the reason that no great philosophical breakthrough has ever come from philosophers.

 John K Clark  

