[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Sun Dec 20 16:47:19 UTC 2009


On Dec 19, 2009, Gordon Swobe wrote:

> In other words, you want to think that if something causes X to do Y, then we can assume X actually knows how to do Y.

To a limited extent, yes. Of course, the more impressive Y is, the more powerful a mind we can expect behind it, and putting a punch card in a column isn't very impressive. Still, it is a specific task carried out by reading and understanding the meaning of a symbol. You think there is a sharp divide between mind and no mind; I believe that, like most things in life, there is no sharpness to be found, there is only a blob.

John K Clark


