[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sun Jan 10 01:29:51 UTC 2010


2010/1/10 Gordon Swobe <gts_2000 at yahoo.com>:

> But the computer simulation of it won't have consciousness any more than will a simulation of an ice cube have coldness. Computer simulations of things do not equal the things they simulate. (I wish I had a nickel for every time I've said that here :-)

But the computer simulation can drive a robot to behave like the thing
it simulates, and such a robotic part can be installed in place of
part of the brain. The result must be (*must* be; I wish I had 5c for
every time I've said that here) that the person with the cyborgised
brain behaves normally and believes that everything is normal. So
either you must allow that it is coherent to speak of a
pseudo-understanding which is subjectively and objectively
indistinguishable from true understanding, or you must admit that the
original premise, that the robot part lacks understanding, is false.
The only other way out is to deny that it is possible to make such
robot parts at all, because there is something about brain physics
that is not computable.


-- 
Stathis Papaioannou

