[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sun Jan 10 15:05:54 UTC 2010


--- On Sat, 1/9/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> After leaving the hospital, the patient might tell you
>> he believes in Santa Claus, but he won't actually "believe"
>> in it; that is, he won't have a conscious subjective
>> understanding of the meaning of "Santa Claus".
> 
> He has no understanding of words before the operation, but
> he still has understanding! If he sees a dog he knows it's a dog, 

To think coherently about dogs or about anything else, one must understand words, and this poor fellow cannot understand his own words, spoken or unspoken, or the words of others. At all.

He completely lacks understanding of words, Stathis. Suffering from complete receptive aphasia, he has no coherent thoughts whatsoever. 

We can suppose less serious aphasias if you like, but to keep our experiment pure I have assumed complete receptive aphasia. 

With b-neurons or possibly with m-neurons we can cure him. With p-neurons we can only program him to speak and behave in a way that objective observers will find acceptable, i.e., we can program him to pass the Turing test.

> But you claim that Cram will actually have no understanding of
> "dog" despite all this. That is what seems absurd: what else could it
> possibly mean to understand a word if not to use the word appropriately
> and believe you know the meaning of the word?

Although Cram uses the word "dog" appropriately after the operation, he won't believe he knows the meaning of the word, i.e., he will not understand the word "dog". If that seems absurd to you, remember that he did not understand it before the operation either. In this respect nothing has changed.


-gts
