[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Fri Dec 25 08:42:39 UTC 2009


2009/12/25 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Thu, 12/24/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> We replace Wernicke's area with an artificial analogue
>> that is as unnatural, robotlike and (it is provisionally assumed)
>> mindless as we can possibly make it.
>
> I have no concerns about how "robotlike" you might make your artificial neurons. I don't assume that natural neurons do not also behave robotically.
>
> I do however assume that natural neurons do not run formal programs like those running now on your computer. (If they do then I must wonder who wrote them.)

Natural neurons do not run human programming languages but they do run
algorithms, insofar as their behaviour can be described
algorithmically. At the lowest level there is a small set of rules,
the laws of physics, which rigidly determine the future state and
output of the neuron from the present state and input. That the
computer was engineered and the neuron evolved should make no
difference: if running a program destroys consciousness then it should
do so in both cases. On the other hand, if the abstract program cannot
give rise to consciousness, then in both the computer and the neuron
the consciousness can be attributed to the physical activity
associated with running the program.
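
To make that concrete, here is a toy sketch in Python of a neuron
treated as a deterministic state machine, where the next state and
output are a fixed function of the present state and input. It uses
the standard leaky integrate-and-fire simplification, and every
parameter value is an illustrative assumption, not a measurement:

# A neuron modelled as a deterministic state machine: the next state
# and output are a fixed function of the present state and input.
# Leaky integrate-and-fire; parameter values are illustrative only.

def lif_step(v, i_in, dt=0.1, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0, r=1.0):
    # Membrane potential decays toward rest and is driven by input.
    v = v + dt * (-(v - v_rest) + r * i_in) / tau
    if v >= v_thresh:      # threshold crossing -> spike, then reset
        return v_reset, 1
    return v, 0            # subthreshold -> no spike

# Apply the same rule repeatedly: the whole trajectory is fixed by
# the initial state and the input sequence, nothing else.
v, n_spikes = -65.0, 0
for step in range(1000):
    v, spike = lif_step(v, i_in=20.0)
    n_spikes += spike
print(n_spikes, "spikes in 100 ms of simulated time")

Whether that update rule is executed by a CPU or realised in membrane
biophysics, the input-output mapping is the same algorithm.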

>> The only requirement is that it masquerade as normal for the benefit
>> of the neural tissue with which it interfaces.
>
> You have not shown that the effects that concern us here do not emanate in some way from the interior behaviors and structures of neurons. As I recall, the electrical activities of neurons take place inside them, not outside them, and it seems very possible to me that this internal electrical activity has an extremely important role to play.

The electrical activity consists in a potential difference across the
neuron's cell membrane due to ion gradients. However, to be sure you
have correctly modelled the behaviour of the neuron you have to model
all of its internal processes. For example, when exposed to a certain
pattern of inputs a neuron may decide to upregulate the number of a
particular type of receptor on its surface, which involves complex
coordination of activity in the nucleus, ribosomes, mitochondria, in
fact most of the organelles and subsystems of the cell. So, in order
to successfully masquerade as a biological neuron, the artificial
neuron must be able to compute exactly what the biological neuron
would have done with its receptors, and alter its output and response
to input accordingly. Such a molecular-level model would be beyond the
capability of modern computers, and the field of computational
neuroscience in large part involves creating simplified models that
computers can cope with. However, there is no guarantee that the
simplified model won't deviate from the natural behaviour, in which
case the subject with the neural prosthesis might well both experience
a subjective change and behave differently.
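
To illustrate the worry, here is a hedged toy model in the same vein
(all dynamics and constants are invented for illustration, standing
in for the receptor trafficking described above): a "full" neuron
that slowly upregulates a receptor gain under sustained input, and a
"simplified" prosthesis that freezes the gain at its initial value.
Driven by identical input, their spike outputs drift apart:

# "Full" model: receptor gain g slowly upregulates under sustained
# input. "Simplified" model: g is frozen. Same inputs, diverging
# outputs. All numbers are invented for illustration.

def step(v, g, i_in, adapt, dt=0.1, tau=10.0, v_rest=-65.0,
         v_thresh=-50.0, v_reset=-70.0, g_rate=0.00005):
    if adapt:
        g += g_rate * dt * i_in   # slow receptor upregulation
    v = v + dt * (-(v - v_rest) + g * i_in) / tau
    if v >= v_thresh:
        return v_reset, g, 1
    return v, g, 0

def run(adapt, n_steps=20000, i_in=16.0):
    v, g, n_spikes = -65.0, 1.0, 0
    for _ in range(n_steps):
        v, g, s = step(v, g, i_in, adapt)
        n_spikes += s
    return n_spikes

print("full model:      ", run(adapt=True), "spikes")
print("simplified model:", run(adapt=False), "spikes")

The point is not the particular numbers but that any internal process
omitted from the simplification can, given enough time, show up in
the neuron's output.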

>> This subject should have no understanding of language...
>
> I don't jump so easily to conclusions.

I am sure that if Wernicke's area were replaced with artificial
neurons close enough in behaviour to the biological neurons then the
subject would understand language normally. You, on the other hand,
have been stating all along that if the artificial neurons pull off
the masquerade by means of running a computer program then despite the
external appearance of understanding there will be no actual
understanding. But this state of affairs would create a very strange
situation: the subject would think he understands, give appropriate
responses to questions, and feel that nothing at all has changed as a
result of the experiment, while in fact he understands nothing at all.
So what is the difference between pseudo-understanding and real
understanding, and how can you be sure you aren't now reading this
with pseudo-understanding?


-- 
Stathis Papaioannou


