[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Wed Dec 16 01:26:28 UTC 2009


2009/12/16 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Tue, 12/15/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> ... the neighbouring neurons *must*
>> respond in the same way with the artificial neurons in place as
>> with the original neurons.
>
> Not so. If you want to make an argument along those lines, then I will point out that an artificial neuron must behave in exactly the same way in response to external stimuli as does a natural neuron if and only if the internal processes of that artificial neuron exactly match those of the natural neuron. In other words, we can know for certain only that natural neurons (or their exact clones) will behave exactly like natural neurons.

What is required is that the artificial neuron have appropriate I/O
devices to interact with the environment and, internally, that it be
able to compute what a biological neuron would do, so that it can
produce the appropriate outputs at the appropriate times (a toy
sketch of such a unit follows the quotation below). Moreover, if you
consider a volume of artificial neurons, only those near the surface
of that volume need have I/O devices, such as stores of
neurotransmitters to squirt into synapses, since only they will be
interfacing with the biological neurons. So the question is: is it
possible to simulate the physical processes inside a neuron on a
computer? Searle agrees that it is, and says so explicitly in the
passage I quoted before:

<Is there some description of the brain such that under that
description you could do a computational simulation of the operations
of the brain? But since according to Church's thesis, anything that
can be given a precise enough characterization as a set of steps can
be simulated on a digital computer, it follows trivially that the
question has an affirmative answer. The operations of the brain can be
simulated on a digital computer in the same sense in which weather
systems, the behavior of the New York stock market or the pattern of
airline flights over Latin America can.>
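
To make that concrete, here is the toy sketch promised above: a
leaky integrate-and-fire unit in Python (my own illustration, with
made-up constants, not a model of any real neuron). A genuine
replacement part would have to compute the neuron's physical
processes in far more detail, but the principle is the same:
integrate the inputs arriving at the synapses and fire when a
threshold is crossed.

    # Toy artificial neuron: compute what a biological neuron would do
    # and produce the appropriate output at the appropriate time.
    class ArtificialNeuron:
        def __init__(self, threshold=1.0, leak=0.9):
            self.threshold = threshold  # firing threshold (illustrative)
            self.leak = leak            # per-step decay of the potential
            self.potential = 0.0        # membrane potential analogue

        def step(self, inputs):
            # Decay the stored potential, then integrate the new inputs.
            self.potential = self.potential * self.leak + sum(inputs)
            if self.potential >= self.threshold:
                self.potential = 0.0    # reset after firing
                return 1                # spike: "squirt neurotransmitter"
            return 0                    # no spike this step

A unit near the surface would attach real I/O devices to step(); an
interior unit needs only the computation.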

> Another way to look at this problem of functionalism (the real issue here, I think)...
>
> Consider this highly simplified diagram of the brain:
>
> 0-0-0-0-0-0
>
> The zeros represent the neurons, the dashes represent the relations between neurons, presumably the activities in the synapses. You contend that provided the dashes exactly match the dashes in a real brain, it will make no difference how we construct the zeros. To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity".
>
> It seems very clear then that in your view the zeros merely play a functional role in supporting the seat of consciousness, which you see in the dashes.
>
> Your theory may seem plausible, and it does allow for the tantalizing extropian idea of nano-neurons replacing natural neurons.
>
> But before we become so excited that we forget the difference between a highly speculative hypothesis and something we must consider true by "logical necessity", consider a theory similar to yours but contradicting it: in that competing theory the neurons act as the seat of consciousness while the dashes merely play the functional role. That functionalist theory of mind seems no less plausible than yours, yet it does not allow for the possibility of artificial neurons.

It is not my theory; it is standard functionalism. The thought
experiment shows that if you replicate the function of the brain, you
must also replicate the consciousness.

In your simplified brain above, suppose the two leftmost neurons are
sensory neurons in the visual cortex and the rest are neurons in the
association cortex and motor cortex. The sensory neurons receive input
from the retina, process this information and send output to
association and motor cortex neurons, including neurons in Wernicke's
and Broca's areas, which end up moving the muscles that produce speech.
We then replace the sensory neurons 0 with artificial neurons X,
giving:

X-X-0-0-0-0

Now, the brain receives visual input from the retina. This is
processed by the X neurons, which send output to the 0 neurons. As far
as the 0 neurons are concerned, nothing has changed: they receive the
same inputs as if the change had not been made, so they behave the
same way as they would have originally, and the brain's owner produces
speech correctly describing what he sees and declaring that it all
looks just the same as before. It's trivially obvious to me that this
is what *must* happen. Can you explain how it could possibly be
otherwise?
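
The same point can be run as a toy program (again my own sketch:
each neuron is reduced to a bare input-output function, and nothing
hangs on the particular functions chosen). If the X units compute
the same mapping as the 0 units they replace, the downstream units
receive identical inputs, so the output cannot differ:

    # Toy replacement experiment: all that the downstream neurons can
    # see is the input-output behaviour of the units upstream.
    def biological(x):
        return 2 * x + 1        # stand-in for the neuron's I/O function

    def artificial(x):
        return x + x + 1        # computed differently, same mapping

    def run(chain, stimulus):
        signal = stimulus       # e.g. input from the retina
        for neuron in chain:
            signal = neuron(signal)
        return signal           # e.g. the command to the speech muscles

    before = [biological] * 6                     # 0-0-0-0-0-0
    after  = [artificial] * 2 + [biological] * 4  # X-X-0-0-0-0

    # The four unchanged neurons receive exactly the same inputs, so
    # the behaviour at the output end is necessarily identical.
    assert run(before, 3) == run(after, 3)

The third neuron has no way of telling whether the signal it
receives came from a 0 or an X, and neither does the brain's owner.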


-- 
Stathis Papaioannou


