[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sat Jan 9 04:41:56 UTC 2010


2010/1/9 Gordon Swobe <gts_2000 at yahoo.com>:

>> However, Sam will truly understand what he is saying while Cram will
>> behave as if he understands what he is saying and believe that he
>> understands what he is saying, without actually
>> understanding anything. Is that right?
>
> He will behave outwardly as if he understands words but he will not "believe" anything. He will have weak AI.

The patient was not a zombie before the operation, since most of his
brain was functioning normally, so why would he be a zombie after it?
Before the operation he sees that people don't understand him when he
speaks and that he doesn't understand them when they speak. He hears
the sounds they make, but they seem like gibberish, which frustrates
him. After the operation, whether he receives the p-neurons or the
c-neurons, he speaks normally, he seems to understand things normally,
and he believes that the operation was a success, since he remembers
his earlier difficulties and sees that he no longer has them.

Perhaps you see the problem I am getting at and are trying to get
around it by saying that Cram would become a zombie. But by what
mechanism could the replacement of only a few neurons negate the
consciousness of the rest of the brain?


-- 
Stathis Papaioannou
