[ExI] The symbol grounding problem in strong AI
stathisp at gmail.com
Sun Jan 10 01:16:12 UTC 2010
2010/1/10 Gordon Swobe <gts_2000 at yahoo.com>:
>> The patient was not a zombie before the operation, since
>> most of his brain was functioning normally, so why would he be a zombie
> To believe something one must have an understanding of the meaning of the thing believed in, and I have assumed from the beginning of our experiment that the patient presents with no understanding of words, i.e., with complete receptive aphasia from a broken Wernicke's. I don't believe p-neurons will cure his aphasia subjectively, but I think his surgeon will eventually succeed in programming him to behave outwardly like one who understands words.
> After leaving the hospital, the patient might tell you he believes in Santa Claus, but he won't actually "believe" in it; that is, he won't have a conscious subjective understanding of the meaning of "Santa Claus".
He has no understanding of words before the operation, but he still
has understanding! If he sees a dog, he knows it's a dog, he knows
whether it's a friendly dog or a vicious dog to be avoided, he knows that dogs
have to eat and how to open a can of dog food, and so on - even though
the word "dog" is incomprehensible to him. After the operation,
whether it's Cram with the p-neurons or Sam with the c-neurons, when
he hears the word "dog" he will get an image of a dog in his head, and
he will think, "That must be what people meant when they were making
sounds and pointing to a dog before!" If he is asked "how many legs
does a dog which has lost one of its legs have?" he will get an image
of a dog hobbling about on three legs and answer, "three"; and he will
remember when he was a child and his own dog was run over by a car and
lost one of its legs. So his behaviour in relation to language will be
exactly the same whether he gets the p-neurons or the c-neurons, and
his cognitions, feelings, beliefs and understanding at least in the
normal part of his brain will also be the same in either case. But you
claim that Cram will actually have no understanding of "dog" despite
all this. That is what seems absurd: what else could it possibly mean
to understand a word if not to use the word appropriately and believe
you know the meaning of the word? That's all you or I can claim at the
moment; how do we know we don't have a zombified language centre?
>> Before the operation he sees that people don't understand
>> him when he speaks, and that he doesn't understand them when they
>> speak. He hears the sounds they make, but it seems like gibberish, making
>> him frustrated. After the operation, whether he gets the
>> p-neurons or the c-neurons, he speaks normally, he seems to understand
>> things normally, and he believes that the operation is a success as he
>> remembers his difficulties before and now sees that he doesn't have
> Perhaps he no longer feels frustrated but still he has no idea what he's talking about!
He only *thinks* he knows what he is talking about and *behaves* as if
he knows what he is talking about.
>> Perhaps you see the problem I am getting at and you are
>> trying to get around it by saying that Cram would become a zombie.
> I have only this question unanswered in my mind: "How much more complete of a zombie does Cram become as a result of the surgeon's long and tedious process of reprogramming his brain to make him seem to function normally despite his inability to experience understanding? When the surgeon finally finishes with him such that he passes the Turing test, will the patient even know of his own existence?"
Why do you think the surgeon needs to do anything to the rest of his
brain? The p-neurons by definition accept input from the auditory
cortex, process it and send output to the rest of the brain exactly
the same as the c-neurons do. That's their one and only job, and the
surgeon's task is to install them in the right place, causing as little
damage to the rest of the brain as possible. And if the p-neurons
duplicate the I/O behaviour of c-neurons, the behaviour of the rest of
the brain and the person as a whole must be the same. It must! Are you
still trying to say that the p-neurons *won't* be able to duplicate
the I/O behaviour of the c-neurons due to lacking understanding? Then
you have to say that p-neurons (zombie or weak AI neurons) are
impossible, that there is something non-algorithmic about the
behaviour of neurons. But you seem very reluctant to agree to this.
Instead, you put yourself in a position where you have to say that
Cram lacks understanding, but behaves as if he has understanding and
believes that he has understanding; in which case, we could all be
Cram and not know it.
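The substitution argument above can be put as a toy simulation (all names here are hypothetical and purely illustrative, not a model of real neurons): if two neuron implementations map the same inputs to the same outputs, then any downstream process that depends only on those outputs behaves identically no matter which one is installed.

```python
# Toy sketch of the I/O-equivalence argument. "c_neuron" stands in for the
# biological neuron, "p_neuron" for its programmed replacement; by stipulation
# they implement the same input-output mapping.

def c_neuron(inputs):
    # Biological neuron: fires (1) if the summed input crosses a threshold.
    return 1 if sum(inputs) > 0.5 else 0

def p_neuron(inputs):
    # Programmed replacement: duplicates the very same I/O mapping.
    return 1 if sum(inputs) > 0.5 else 0

def rest_of_brain(neuron, stimuli):
    # Downstream processing sees only the neuron's outputs, never its insides.
    return [neuron(s) for s in stimuli]

stimuli = [[0.2, 0.1], [0.4, 0.3], [0.9, 0.0]]

# The rest of the system cannot distinguish the two implementations.
assert rest_of_brain(c_neuron, stimuli) == rest_of_brain(p_neuron, stimuli)
```

This is of course only the functionalist premise in miniature: whatever the p-neuron lacks "inside", the rest of the brain receives identical signals either way.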