[ExI] The symbol grounding problem in strong AI

scerir scerir at libero.it
Tue Dec 29 17:36:51 UTC 2009


[-gts] To borrow a phrase popularized by the philosopher Thomas Nagel, who 
famously wrote an essay titled _What is it like to be a bat?_, there is 
something "it is like" to mentally solve or understand a mathematical 
equation. Computers do math well, but you can't show me how they could 
possibly know what it's like. 

#

There are robot-scientists 
http://www.wired.com/wiredscience/2009/12/download-robot-scientist/ 
and smart software. I do not know whether they are conscious or 
intentional. I'm not an expert in "semantics", but it seems to me that every 
meaning is contextual, or inten*s*ional. For "semantics" in the context of 
programming languages, see here: http://tinyurl.com/yjdpkry . Also, there are 
several examples (e.g. quantum mechanics, its principles, its rules) showing 
that scientists cannot form any idea, any mental representation, of what they 
write. They understand their equations, but they do not understand their 
meaning.
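
A loose illustration of the inten*s*ional point (my own sketch, not taken 
from the linked notes on programming-language semantics): two programs can be 
extensionally the same, computing the same function, while differing 
intensionally in how they compute it; either way, the machine only runs the 
procedure, and what the result is "about" is not in the code.

    def square_by_multiplication(n: int) -> int:
        # Intensionally: a single multiplication.
        return n * n

    def square_by_summation(n: int) -> int:
        # Intensionally: the sum of the first n odd numbers, 1 + 3 + ... + (2n - 1).
        return sum(2 * k + 1 for k in range(n))

    # Extensionally the two agree on the non-negative integers:
    assert all(square_by_multiplication(n) == square_by_summation(n)
               for n in range(100))

    # Nothing in either procedure says what the number "means";
    # the symbols get manipulated, the grounding stays with us.
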


