[ExI] The symbol grounding problem in strong AI
stathisp at gmail.com
Tue Jan 5 11:32:31 UTC 2010
2010/1/5 Gordon Swobe <gts_2000 at yahoo.com>:
>> But how would we ever distinguish the NCC from something
>> else that just had an effect on general neural function?
>> If hypoxia causes loss of consciousness, that doesn't mean that
>> the NCC is oxygen.
> We know ahead of time that the presence of oxygen will play a critical role.
> Let us say we think neurons in brain region A play the key role in
> consciousness. If we do not shut off the supply of oxygen but instead
> shut off the supply of XYZ to region A, and the patient loses
> consciousness, we then have reason to say that oxygen, XYZ and the
> neurons in region A play important roles in consciousness. We then
> test many similar hypotheses with many similar experiments until we
> have a complete working hypothesis to explain the NCC.
But you claim that it is possible to make p-neurons which function
like normal neurons but, being computerised, lack the NCC, and putting
these neurons into region A as replacements will not cause the patient
to fall to the ground unconscious. So if in your experiments you see
the patient lose consciousness, or show any other behavioural change,
that change must be due to something computable, and therefore not to
the NCC.
The essential function of the NCC is to prevent the patient from being
a zombie, and you can never observe this in an experiment.