[ExI] The symbol grounding problem in strong AI
Gordon Swobe
gts_2000 at yahoo.com
Tue Dec 15 22:25:32 UTC 2009
--- On Tue, 12/15/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
> ... the neighbouring neurons *must*
> respond in the same way with the artificial neurons in place as
> with the original neurons.
Not so. If you want to make an argument along those lines, then I will point out that an artificial neuron must respond to external stimuli in exactly the same way as a natural neuron if and only if the internal processes of that artificial neuron exactly match those of the natural neuron. In other words, we can know for certain only that natural neurons (or their exact clones) will behave exactly like natural neurons.
Another way to look at this problem of functionalism (the real issue here, I think)...
Consider this highly simplified diagram of the brain:
0-0-0-0-0-0
The zeros represent the neurons, the dashes represent the relations between neurons, presumably the activities in the synapses. You contend that provided the dashes exactly match the dashes in a real brain, it will make no difference how we construct the zeros. To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity".
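To make the point of dispute concrete, here is a minimal sketch in code (the threshold model, class names, and weights are illustrative assumptions of mine, not anything from the discussion): two "zeros" with entirely different internals that nonetheless produce identical "dashes", i.e. identical input-output behaviour.

```python
# Two hypothetical "neurons" with different internals but identical
# input-output behaviour: a weighted-threshold unit, and a lookup-table
# unit that merely memorizes the same mapping.
from itertools import product

class ThresholdNeuron:
    """Fires (outputs 1) when the weighted sum of inputs meets a threshold."""
    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold

    def fire(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return int(total >= self.threshold)

class LookupNeuron:
    """Same external behaviour, entirely different internals: a stored table."""
    def __init__(self, table):
        self.table = table

    def fire(self, inputs):
        return self.table[tuple(inputs)]

# A "natural" neuron, and a replacement built by copying only its
# input-output mapping, nothing about how it computes.
natural = ThresholdNeuron(weights=[0.6, 0.6], threshold=1.0)
artificial = LookupNeuron({inp: natural.fire(inp)
                           for inp in product([0, 1], repeat=2)})

# Nothing observable from outside distinguishes the two:
for inp in product([0, 1], repeat=2):
    assert natural.fire(inp) == artificial.fire(inp)
```

Whether sameness of this external mapping (the dashes) suffices for sameness of mind, regardless of what the zeros are made of, is exactly the question the two competing functionalist theories answer differently.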
It seems very clear then that in your view the zeros merely play a functional role in supporting the seat of consciousness, which you see in the dashes.
Your theory may seem plausible, and it does allow for the tantalizing extropian idea of nano-neurons replacing natural neurons.
But before we become so excited that we forget the difference between a highly speculative hypothesis and something we must consider true by "logical necessity", consider a theory similar to yours but contradicting it: in that competing theory the neurons act as the seat of consciousness while the dashes merely play the supporting functional role. That functionalist theory of mind seems no less plausible than yours, yet it does not allow for the possibility of artificial neurons.
And neither functionalist theory explains how brains become conscious!
-gts