[ExI] The symbol grounding problem in strong AI
stathisp at gmail.com
Tue Jan 5 09:23:44 UTC 2010
2010/1/5 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Mon, 1/4/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>> Moreover, you seem to be saying that there is only one type of c-neuron
>> that could fill the shoes of the original b-neuron, although
>> presumably there are different m-neurons that could give rise to this
>> c-neuron. Is that right?
> 1. I think b-neurons work as c-neurons in the relevant parts of the brain.
> 2. I think all p-neurons work as ~c-neurons in the relevant parts of the brain.
> 3. I annoy Searle, but do not, I think, fully disclaim his philosophy, by hypothesizing that some possible m-neurons work like c-neurons.
> Does that answer your question?
Is there only one type of c-neuron or is it possible to insert
m-neurons which, though they are functionally identical to b-neurons,
result in a different kind of consciousness?
>> Suppose the m-neuron (which is a c-neuron) contains a
>> mechanism to open and close sodium channels depending on the
>> transmembrane potential difference. Would changing from an analogue
>> circuit to a digital circuit for just this mechanism change the neuron
>> from a c-neuron to a ~c-neuron?
> Philosophically, yes. In a practical sense? Probably not in any detectable way. But you've headed down a slippery slope that ends with describing real natural brains as digital computers. I think you want to go there (and speaking as an extropian I certainly don't blame you for wanting to), and if so then perhaps we should just cut to the chase and go there to see if the idea actually works.
Philosophy has to give an answer that's in accordance with what would
actually happen, and with what you would actually experience; otherwise
it's worse than useless. The discussion we have been having is an
example of a philosophical problem with profound practical consequences.
If I get a new super-fast computerised brain and you're right, I would
be killing myself, whereas if I'm right, I would become an immortal
super-human. I think it's important to be sure of the answer before
undergoing such a procedure.
You shouldn't dismiss the slippery slope argument so quickly. Either
you suddenly become a zombie when a certain proportion of your neurons'
internal workings are computerised, or you don't. If you don't, then
the options are that you don't become zombified at all, or that you
become zombified in proportion to how much of each neuron is
computerised. Both sudden and gradual zombification seem implausible
to me. The only plausible alternative is that you don't become
zombified at all.
> No, he does not "actually" believe anything. He merely
> reports that he feels normal and reports that he
> understands. His surgeon programmed all p-neurons such that
> he would pass the TT and report healthy intentionality,
> including but not limited to p-neurons in Wernicke's area.
>> This is why the experiment considers *partial* replacement.
>> Even before the operation Cram is not a zombie: despite not
>> understanding language he can see, hear, feel, recognise people and
>> objects, understand that he is sick in hospital with a stroke, and
>> he certainly knows that he is conscious. After the operation he has the
>> same feelings, but in addition he is pleased to find that he
>> now understands what people say to him, just as he remembers
>> before the stroke.
> I think that after the initial operation he becomes a complete basket-case requiring remedial surgery, and that in the end he becomes a philosophical zombie or something very close to one. If his surgeon has experience then he becomes a zombie or near zombie on day one.
I don't understand why you say this. Perhaps I haven't explained what
I meant well. The p-neurons are drop-in replacements for the
b-neurons, just like pulling out the LM741 op amps in a piece of audio
equipment and replacing them with TL071s. The TL071 performs the same
function as the 741 and has the same pin-out, so the equipment will
function just the same, even though the internal circuitry of the two
ICs is quite different. You need know nothing at all about the
insides of op amps to use them or find replacements for them in a
circuit: as long as the I/O behaviour is the same, one could be
driven by vacuum tubes and the other by little demons and the circuit
would work just fine in both cases.
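The substitution argument can be sketched in code (my own illustrative analogy, not anything from the thread; the class and function names are invented for the example). Two components have entirely different internals but honour the same I/O contract, and the surrounding "circuit" cannot tell them apart:

```python
# Illustrative sketch: functional substitution based on I/O behaviour alone.
# "AnalogNeuron" and "DigitalNeuron" are hypothetical stand-ins for a
# b-neuron and a p-neuron (or a 741 and a TL071).

class AnalogNeuron:
    """Stand-in for a b-neuron: one kind of internal mechanism."""
    def fire(self, input_mv):
        # internals: continuous threshold comparison
        return 1 if input_mv > 55.0 else 0

class DigitalNeuron:
    """Stand-in for a p-neuron: different internals, same I/O contract."""
    def fire(self, input_mv):
        # internals: quantise the input first, then compare
        return int(round(input_mv) > 55)

def circuit(neuron, inputs):
    """The surrounding 'brain' sees only inputs and outputs."""
    return [neuron.fire(mv) for mv in inputs]

signals = [40.0, 60.0, 70.0, 30.0]
print(circuit(AnalogNeuron(), signals))   # [0, 1, 1, 0]
print(circuit(DigitalNeuron(), signals))  # [0, 1, 1, 0] -- indistinguishable
```

From `circuit`'s point of view the two are interchangeable; only an observer with access to the internals could tell which one is installed. That is the whole content of the "drop-in replacement" guarantee.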
It's the same with the p-neurons. The manufacturer guarantees that the
I/O behaviour of a p-neuron is identical to that of the b-neuron that
it replaces, but that's all that is guaranteed: the manufacturer
neither knows nor cares about consciousness, understanding or
intentionality.
Now, isn't it clear from this that Cram must behave normally and must
(at least) have normal experiences in the parts of his brain which
aren't replaced, given that he wasn't a zombie before the operation?
If Cram has neurons in his language centre replaced then he must be
able to communicate normally and respond to verbal input normally in
every other way: draw a picture, laugh with genuine amusement at a
joke, engage in philosophical debate. He must also genuinely believe
that he understands everything, since if he didn't he would tell us.
So you are put in a position where you have to maintain that Cram
behaves as if he has understanding and genuinely believes that he has
understanding, while in fact he doesn't understand anything. Is this
plausible?