[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Sun Dec 27 04:05:55 UTC 2009


2009/12/27 Gordon Swobe <gts_2000 at yahoo.com>:

> Well, if you read my post from the other day (you never replied to the relevant portion of it), you will see that I allowed that if the programs replace only a negligible part of the material brain processes they simulate, they would negate the subject's intentionality/consciousness to a similarly negligible degree.
>
>>> You have not shown that the effects that concern us
>>> here do not emanate in some way from the interior behaviors
>>> and structures of neurons. As I recall, the electrical
>>> activity of neurons takes place inside them, not outside
>>> them, and it seems very possible to me that this internal
>>> electrical activity has an extremely important role to
>>> play.
>>
>> The electrical activity consists in a potential difference
>> across the neuron's cell membrane due to ion gradients. However, to
>> be sure you have correctly modelled the behaviour of the neuron...

My reply was that the internal processes of the neuron need to be
taken into consideration in order to simulate it properly. It doesn't
matter whether part of the neuron, the whole neuron, or a large chunk
of the brain is artificial: as long as the simulation is adequate,
there will be no change in consciousness.
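To make that concrete, here is a rough sketch in Python (just an
illustration I'm adding here; the leaky integrate-and-fire model and
its parameter values are my own toy choices, not a claim about what an
adequate simulation actually requires). It tracks an internal
electrical variable, the membrane potential, and derives the neuron's
external behaviour, its spikes, from it:

class LIFNeuron:
    # Toy leaky integrate-and-fire neuron. All names and numbers are
    # invented for the example, not taken from the discussion above.
    def __init__(self, tau_m=20.0, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-70.0, r_m=10.0):
        self.tau_m = tau_m        # membrane time constant (ms)
        self.v_rest = v_rest      # resting potential (mV)
        self.v_thresh = v_thresh  # spike threshold (mV)
        self.v_reset = v_reset    # potential after a spike (mV)
        self.r_m = r_m            # membrane resistance (toy units)
        self.v = v_rest           # current membrane potential (mV)

    def step(self, i_input, dt=1.0):
        # One Euler step of dv/dt = (-(v - v_rest) + r_m * i_input) / tau_m.
        self.v += (-(self.v - self.v_rest) + self.r_m * i_input) * dt / self.tau_m
        if self.v >= self.v_thresh:   # threshold crossed: spike and reset
            self.v = self.v_reset
            return True
        return False

neuron = LIFNeuron()
spike_times = [t for t in range(200) if neuron.step(i_input=2.0)]
print("spike times (ms):", spike_times)

Whether something this crude counts as "adequate" is of course the
open question; the point is only that the neuron's external spiking
behaviour falls straight out of simulating its internal electrical
activity.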

> I will, in the next day or so if time allows, write a separate post for the sole purpose of explaining what I see as the logical fallacy in your behaviorist/functionalist arguments. I wrote one already (the post with the "0-0-0-0" diagram), but I see it didn't leave any lasting impression on you, even though you never offered any counter-arguments. So I'll try putting another one together.

Perhaps you missed this post:

> Another way to look at this problem of functionalism (the real issue here, I think)...
>
> Consider this highly simplified diagram of the brain:
>
> 0-0-0-0-0-0
>
> The zeros represent the neurons, the dashes represent the relations between neurons, presumably the activities in the synapses. You contend that provided the dashes exactly match the dashes in a real brain, it will make no difference how we construct the zeros. To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity".
>
> It seems very clear then that in your view the zeros merely play a functional role in supporting the seat of consciousness, which you see in the dashes.
>
> Your theory may seem plausible, and it does allow for the tantalizing extropian idea of nano-neurons replacing natural neurons.
>
> But before we become so excited that we forget the difference between a highly speculative hypothesis and something we must consider true by "logical necessity", consider a theory similar to yours but contradicting it: in that competing theory the neurons act as the seat of consciousness while the dashes merely play the functional role. That functionalist theory of mind seems no less plausible than yours, yet it does not allow for the possibility of artificial neurons.

It is not my theory; it is standard functionalism. The thought
experiment shows that if you replicate the function of the brain, you
must also replicate the consciousness.

In your simplified brain above, suppose the two leftmost neurons are
sensory neurons in the visual cortex and the rest are neurons in the
association cortex and motor cortex. The sensory neurons receive input
from the retina, process this information, and send output to
association and motor cortex neurons, including neurons in Wernicke's
and Broca's areas, which end up moving the muscles that produce
speech. We then replace the sensory neurons 0 with artificial neurons
X, giving:

X-X-0-0-0-0

Now, the brain receives visual input from the retina. This is
processed by the X neurons, which send output to the 0 neurons. As far
as the 0 neurons are concerned, nothing has changed: they receive the
same inputs as if the change had not been made, so they behave the
same way as they would have originally, and the brain's owner produces
speech correctly describing what he sees and declaring that it all
looks just the same as before. It's trivially obvious to me that this
is what *must* happen. Can you explain how it could possibly be
otherwise?
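
The same logic can be written out as a toy program (again just an
illustration I'm adding, with made-up functions standing in for whole
populations of neurons): two sensory "neurons" with different internals
but the same input-output mapping feed the same downstream chain, and
the chain's output is identical either way.

def biological_sensory_neuron(retinal_input):
    # The original "0": fires if the stimulus is bright enough.
    return 1 if retinal_input > 0.5 else 0

def artificial_sensory_neuron(retinal_input):
    # The replacement "X": different internals (a lookup table here),
    # but exactly the same mapping from inputs to outputs.
    return {True: 1, False: 0}[retinal_input > 0.5]

def downstream_cortex(spike):
    # The remaining "0" neurons, ending in speech: they only ever see
    # the spike they are handed, never how it was produced.
    return "that looks bright" if spike else "that looks dark"

for stimulus in (0.2, 0.9):
    before = downstream_cortex(biological_sensory_neuron(stimulus))
    after = downstream_cortex(artificial_sensory_neuron(stimulus))
    assert before == after  # the rest of the chain cannot tell the difference
    print(stimulus, "->", after)

The downstream functions never get to inspect how their input was
produced; that is the whole point.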



-- 
Stathis Papaioannou


