[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sat Jan 2 16:46:20 UTC 2010


--- On Sat, 1/2/10, John Clark <jonkc at bellsouth.net> wrote:

> There is no such thing as strong AI
> research; there is just AI research. Nobody is doing
> Artificial Consciousness research because claiming success
> would be just too easy.

Stathis and I engage in such research on this list, even as you watch and participate. 

> Because a ham sandwich is a noun and a photo of one is a very different
> noun, and consciousness is not even a noun at all.

My dictionary calls it a noun. 

Stathis argues, not without reason, that if we can compute the brain, then computer simulations of brains should have intentionality. I argue that even if we find a way to compute the brain, it does not follow that a simulation of it would have intentionality, any more than it follows that a computer simulation of a ham sandwich would taste like a ham sandwich, or that a computer simulation of a waterfall would make a computer wet. Computer simulations of things do not equal the things they simulate.

I recall learning of a tribe in the Amazon rainforest, or some such place, whose members had never seen cameras. After seeing photographs of themselves for the first time, they came to fear them on the grounds that these amazing simulations of themselves had captured their spirits. Not only did these naive people believe in spirits; they must also have believed that simulations of things somehow equal the things they simulate.

-gts


