[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Thu Dec 31 05:13:38 UTC 2009


On Dec 30, 2009, Gordon Swobe wrote:

> Simulated flames don't *burn* simulated objects, John,

I assume you believe the words "simulated flames" have a meaning, otherwise you wouldn't be talking about them, so I would think the idea of such a thing burning a simulated object would also be understandable to you, but apparently not. I know what I mean by the term but I can't imagine what you do. And if simulated objects couldn't affect each other, why in the world would scientists spend so much time writing such computer programs?
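To make that concrete, here is a minimal sketch (my own toy illustration, nothing more) of a simulation in which a simulated flame really does change the state of a simulated object:

class Log:
    def __init__(self, mass: float):
        self.mass = mass  # remaining unburned mass, arbitrary units

class Flame:
    def __init__(self, burn_rate: float):
        self.burn_rate = burn_rate  # mass consumed per time step

    def burn(self, log: Log) -> None:
        # The flame's action changes the log's state.
        log.mass = max(0.0, log.mass - self.burn_rate)

log = Log(mass=10.0)
flame = Flame(burn_rate=2.5)
for step in range(4):
    flame.burn(log)
    print(f"step {step + 1}: log mass = {log.mass}")

# The simulated log ends up fully consumed: inside the simulation,
# simulated objects evidently do affect one another.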

And real flames don't *burn* either; they just obey the laws of chemistry.

> simulated minds can't observe themselves

So you have decreed many times. You ask us to ignore a century and a half of hard evidence on how Evolution works simply on your authority, and then you accuse us of being slaves to religious doctrine. There is plenty of evidence that Charles Darwin was right and none that you are. You can't both be right.

> any more than can the simulated goldfish that appear to swim on some screen-savers.

That's probably because most screen-savers I've seen don't have minds. I refuse to use the term "simulated minds" because it is as nonsensical as "simulated arithmetic". 
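If it helps, here is the analogy in code (again, just an illustrative sketch of my own): a program that "simulates" addition simply performs addition, so the word "simulated" adds nothing.

def simulated_add(a: int, b: int) -> int:
    # A "simulated" sum is just a sum.
    return a + b

assert simulated_add(2, 2) == 4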

 John K Clark

