[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Fri Dec 18 03:32:17 UTC 2009


2009/12/18 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Wed, 12/16/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> What I have been considering is an artificial neuron. The
>> artificial neuron consists of (1) a computer, (2) a computer program
>> which simulates the chemical processes that take place in a
>> biological neuron, and (3) I/O devices which allow interaction with a
>> biological neuron. The I/O devices might include neurotransmitters,
>> chemoreceptors, electrodes to measure electrical potentials
>> or directly stimulate neurons, and so on.
>
> Let's go inside that neuron and look around. What do we see?
>
> I see a computer running a formal program, a program no different in principle from those running on the computer in front of me right now. That program has no understanding of the symbols it manipulates, yet it drives all the behavior of the neuron. On your account your brain runs billions of these mindless programs, and together they comprise the greater program that causes your thoughts and behaviors. But I see nothing in your scenario that explains how billions of mindless neurons come together to create mindfulness.

The carbon, hydrogen, oxygen, nitrogen etc. atoms in the brain don't
have either consciousness or intelligence, but when they jostle each
other according to the laws of physics, intelligence and consciousness
emerge. What sort of explanation as to how this happens (over and
above the observation that it does happen) could possibly satisfy you?
This is what Chalmers calls the "hard problem" of consciousness, in
contrast to the nuts-and-bolts "easy problem" that neuroscience
attempts to answer. I prefer to set it aside as a pseudo-problem.

> It doesn't matter to me if some of those neurons exist in the periphery, as integral parts of sense perception. We want to know how minds happen.
>
> It seems to me that you can object by stating that each of the billions of programs really does have a mind, or that the larger program in which those programs exist only as modules has a mind, but then we've only rediscovered Searle's formal argument.
>
> So here we sit now inside one of your artificial neurons discussing the same subject that we've discussed in other messages: Searle's formal argument that programs are neither constitutive of nor sufficient for minds.

Except that there is no reason to believe this unless you assume it to
begin with. You may as well assert that atoms are neither constitutive
of nor sufficient for minds. But given that (a) atoms can give rise to
mind, (b) the behaviour of atoms can be modelled by a computer
program, and (c) replacing the atoms with the model of the atoms gives
rise to the same mind, it follows that computer programs can give rise
to minds. You don't agree with (c), but you haven't said what you
think would happen if part of your brain were replaced with a network
of artificial neurons controlled by a computer model. Either you would
say that everything feels the same or you would say that something
feels different. Have a guess: which would it be?
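
To make (b) concrete, here is a minimal sketch of the kind of
artificial neuron I described earlier. It uses a leaky
integrate-and-fire model in Python as a stand-in for the far more
detailed chemical simulation, with the input current and the spike
output standing in for the chemoreceptors and electrodes. All names
and parameter values are illustrative assumptions, not a definitive
implementation.

# A minimal sketch, assuming a leaky integrate-and-fire model in
# place of the full chemical simulation. Parameter values are
# illustrative only.

class ArtificialNeuron:
    """Drop-in neuron: integrates input current and fires spikes."""

    def __init__(self, tau=10.0, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-70.0):
        self.tau = tau            # membrane time constant (ms)
        self.v_rest = v_rest      # resting potential (mV)
        self.v_thresh = v_thresh  # spike threshold (mV)
        self.v_reset = v_reset    # post-spike reset potential (mV)
        self.v = v_rest           # current membrane potential (mV)

    def step(self, input_current, dt=1.0):
        """Advance the simulation by dt ms; return True on a spike.

        input_current stands in for what the chemoreceptors would
        report; a True return is where the electrodes or released
        neurotransmitters would stimulate downstream neurons.
        """
        # Leaky integration: decay towards rest plus injected current.
        dv = (-(self.v - self.v_rest) + input_current) / self.tau
        self.v += dv * dt
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True   # fire: drive the output devices
        return False

if __name__ == "__main__":
    neuron = ArtificialNeuron()
    # Constant drive; print the spike train over 100 ms.
    spikes = [t for t in range(100) if neuron.step(input_current=20.0)]
    print("spike times (ms):", spikes)

Nothing in the argument hangs on this particular model; the point is
only that the neuron's input-output behaviour is driven entirely by a
formal program, which is exactly the case at issue.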


-- 
Stathis Papaioannou


