[ExI] How not to make a thought experiment (was: How to ground a symbol)

John Clark jonkc at bellsouth.net
Mon Feb 1 17:04:58 UTC 2010


On Jan 31, 2010, Gordon Swobe wrote:

> Let me know what you think.
> http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php

More of the same. You ask us to imagine a room too large to fit into the observable universe and then say that it acts intelligently but "obviously" it doesn't understand anything. You just refuse to consider two possibilities:

1)  That you don't understand understanding as well as you think you do.
2)  Even if you don't understand how the room could understand, it could still understand.

In fact, if Darwin is right (and there is an astronomical amount of evidence that he is) then that room MUST have consciousness, despite your or my lack of comprehension of the mechanics of it all. And even if Darwin is not right, every one of your arguments against consciousness existing in a robot could just as easily be used to argue against consciousness existing in your fellow human beings; but for some reason you seem unenthusiastic about pursuing that line of thought.

 John K Clark


