[ExI] The symbol grounding problem in strong AI

Eugen Leitl eugen at leitl.org
Mon Dec 14 13:25:19 UTC 2009


On Mon, Dec 14, 2009 at 05:11:49AM -0800, Gordon Swobe wrote:

> I mean that if we want to refute the position of this philosopher who goes by the name of Searle, then we need to show exactly how programs overcome the symbol grounding problem.
> 
> I think everyone will agree that a piece of paper has no conscious understanding of the symbols it holds, i.e., that a piece of paper cannot overcome the symbol grounding problem. If a program differs from a piece of paper such that it can have conscious understanding of the symbols it holds, as in strong AI on a software/hardware system, then how does that happen?
> 
> Philosophers and cognitive scientists have some theories about how *minds* do it, but nobody really knows for certain how the physical brain does it in any sense we might duplicate. 
> 
> If it has no logical flaws, Searle's formal argument shows that, however brains do it, they don't do it by running programs.

*plonk*


