[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Mon Dec 14 13:11:49 UTC 2009


--- On Sun, 12/13/09, Ben Zaiboc <bbenzai at yahoo.com> wrote:

>> The challenge ... is ... to show that formal programs differ in 
>> some important way from shopping lists, some important way that 
>> allows programs to overcome the symbol grounding problem.
> 
> I've just been following this thread peripherally, but this
> caught my attention.  Are you *seriously* saying that
> you think shopping lists don't differ from programs? 

I mean that if we want to refute the position of this philosopher who goes by the name of Searle, then we need to show exactly how programs overcome the symbol grounding problem.

I think everyone will agree that a piece of paper has no conscious understanding of the symbols it holds, i.e., that a piece of paper cannot overcome the symbol grounding problem. If a program differs from a piece of paper such that it can have conscious understanding of the symbols it holds, as in strong AI on a software/hardware system, then how does that happen?
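To make the point concrete, here is a minimal sketch of pure symbol manipulation (my own toy illustration with a made-up rule table, not Searle's Chinese Room or anyone's actual system). It produces the "right" output tokens by shape-matching alone; nothing in it connects any token to the thing the token names, so in the relevant respect it is no better off than the shopping list:

# Toy illustration of ungrounded symbol manipulation. The rule table
# is hypothetical; the point is that the program maps input tokens to
# output tokens by their shape alone, with no access to what any
# token denotes.

RULES = {
    ("what", "buy"): "milk",   # "milk" here is just an uninterpreted token
    ("how", "many"): "two",    # and so is "two"
}

def reply(tokens):
    """Return an output token by pattern-matching the first two tokens."""
    return RULES.get(tuple(tokens[:2]), "unknown")

print(reply(["what", "buy", "today"]))    # -> milk
print(reply(["how", "many", "cartons"]))  # -> two

Whatever a mind does with "milk", it is not captured by anything in that table, however large we make it.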

> Secondly, if you don't think a program can solve the
> mysteriously difficult 'symbol grounding problem', how can a
> brain do it?  

Philosophers and cognitive scientists have some theories about how *minds* do it, but nobody knows for certain how the physical brain does it, at least not in any sense we might duplicate.

If it has no logical flaws, Searle's formal argument (programs are purely syntactic, minds have semantic content, and syntax by itself is neither constitutive of nor sufficient for semantics) shows that however brains do it, they don't do it by running programs.

-gts
