[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Mon Dec 21 23:38:18 UTC 2009


On Dec 21, 2009, at 9:07 AM, Gordon Swobe wrote:
> 
> The experiment in the CRA shows that programs don't have it because the man representing the program can't grok Chinese

The man represents the program? What utter crap. In the idiotic Chinese Room world the silly man doesn't even represent something important like an if-then statement; at best the man represents something very specific and thus dull, like "let k = 3".

> even if computers *did* have consciousness, they still would have no understanding of the meanings of the symbols contained in their programs.

There may be stupider statements than the one above, but I am unable to come up with an example, at least not right at this instant off the top of my head.

 John K Clark

 
