[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Fri Jan 8 23:26:45 UTC 2010


--- On Fri, 1/8/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

> I think I see what you mean now. The generic p-neurons
> can't have any information about language pre-programmed, so the patient
> will have to learn to speak again. However, the same problem will occur
> with the c-neurons.

Replacement with c-neurons would work in a straightforward manner, even supposing the patient might need to relearn language. But with p-neurons he will have no experience of understanding words, even after his surgeon programs them.

And because the experience of understanding words affects the behavior of the neurons associated with that understanding, our surgeon/programmer of p-neurons faces a tremendous challenge, one that his c-neuron-replacing colleagues needn't face.

> However, Sam will truly understand what he is saying while Cram will 
> behave as if he understands what he is saying and believe that he 
> understands what he is saying, without actually
> understanding anything. Is that right?

He will behave outwardly as if he understands words, but he will not "believe" anything. He will have weak AI.

-gts
