[ExI] The symbol grounding problem in strong AI
Gordon Swobe
gts_2000 at yahoo.com
Wed Dec 16 14:10:09 UTC 2009
--- On Tue, 12/15/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
> http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html
...
> So, Searle allows that the behaviour of a neuron could be
> copied by a computer program, but that this artificial neuron
> would lack the essential ingredient for consciousness. This claim
> can be refuted with a purely analytic argument, valid independently
> of any empirical fact about the brain. The argument consists in
> considering what you would experience if part of your brain were
> replaced with artificial neurons that are functionally equivalent
> but (for the purpose of the reductio) lacking in the essential
> ingredient of consciousness.
Glad to see you read that article.
I don't understand why you say you refuted anything with a purely analytic argument that does not depend on any empirical fact, when your argument consists of imagining an empirical fact! But that's beside the point...
It looks like you want to refute Searle's claim that although a computer simulation of a brain is possible, such a simulation will not have intentionality/semantics. On Searle's view it will have no more semantics than a computer simulation of anything else has the real properties of the thing it simulates. A simulation is, umm, a simulation.
I once wrote a gaming application in C++ that contained an imaginary character. Because the character interacted in complex ways with the human player in spoken language (it used voice recognition), I found it handy to create an object called "brain" in my code to represent the character's thought processes. Had I had the knowledge and the time, I could have created a complete computer simulation of a real brain.
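To give a sense of the structure I mean, here is a rough sketch in the spirit of that code (illustrative only, not the original program; the Brain and Character classes and the canned responses are placeholders):

// Illustrative sketch only -- not the original game code.
// A "Brain" object stands in for the character's thought processes:
// it maps recognized player utterances to canned responses, which is
// symbol manipulation with no understanding behind it.
#include <iostream>
#include <map>
#include <string>

class Brain {
    std::map<std::string, std::string> responses;  // symbol -> symbol, nothing more
public:
    Brain() {
        responses["hello"]   = "Greetings, traveler.";
        responses["goodbye"] = "Farewell.";
    }
    std::string think(const std::string& utterance) const {
        auto it = responses.find(utterance);
        return it != responses.end() ? it->second : "I do not understand.";
    }
};

class Character {
    Brain brain;  // the character's "mind"
public:
    std::string hear(const std::string& utterance) const {
        return brain.think(utterance);  // manipulate symbols in, symbols out
    }
};

int main() {
    Character npc;
    std::cout << npc.hear("hello") << '\n';   // Greetings, traveler.
    std::cout << npc.hear("weather") << '\n'; // I do not understand.
}

Nothing in that lookup grounds the symbols; whatever "understanding" seems to be there exists only in the minds of the programmer and the player. A complete simulation of a real brain would be enormously more complicated, but, on Searle's view, no different in kind.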
Assume I had done so. Did my character have understanding of the words it manipulated? Did the program itself have such understanding? In other words, did either the character or the program overcome the symbol grounding problem?
No, no, and no. I merely created a computer simulation in which an imaginary character with an imaginary brain pretended to overcome the symbol grounding problem. I did nothing more interesting than a cartoonist who draws characters for your local newspaper.
-gts