[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Fri Dec 18 00:09:41 UTC 2009


--- On Wed, 12/16/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> What I have been considering is an artificial neuron. The
> artificial neuron consists of (1) a computer, (2) a computer program
> which simulates the chemical processes that take place in a
> biological neuron, and (3) I/O devices which allow interaction with a
> biological neuron. The I/O devices might include neurotransmitters,
> chemoreceptors, electrodes to measure electrical potentials
> or directly stimulate neurons, and so on. 

Let's go inside that neuron and look around. What do we see? 

I see a computer running a formal program, a program no different in principle from those running on the computer in front of me right now. That program has no understanding of the symbols it manipulates, yet it drives all the behavior of the neuron. On your account, your brain runs billions of these mindless programs, which together make up the greater program that causes your thoughts and behaviors. But I see nothing in your scenario that explains how billions of mindless neurons come together to create mindfulness.
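
To make the point concrete, here is a minimal sketch of the kind of formal program such a neuron might run. I am assuming a simple leaky integrate-and-fire model; the class, the names, and every parameter are my own illustration, not anything you specified:

    # Illustrative only: a toy leaky integrate-and-fire neuron.
    # Everything here is hypothetical; it stands in for whatever
    # simulation program the artificial neuron actually runs.
    class SimulatedNeuron:
        def __init__(self, threshold=1.0, leak=0.9, reset=0.0):
            self.potential = reset      # membrane potential (arbitrary units)
            self.threshold = threshold  # firing threshold
            self.leak = leak            # per-step decay factor
            self.reset = reset          # potential after a spike

        def step(self, synaptic_input):
            """Advance one time step; return True if the neuron fires."""
            self.potential = self.potential * self.leak + synaptic_input
            if self.potential >= self.threshold:
                self.potential = self.reset
                return True   # an I/O device would release neurotransmitter here
            return False

    neuron = SimulatedNeuron()
    print([neuron.step(x) for x in [0.3, 0.4, 0.5, 0.1, 0.6]])
    # -> [False, False, True, False, False]

Notice that nothing in this program refers to anything outside itself; it merely updates numbers according to rules. That is all any formal program does.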

It doesn't matter to me whether some of those neurons sit in the periphery, as integral parts of sense perception. We want to know how minds happen.

It seems to me that you could object that each of the billions of programs really does have a mind, or that the larger program in which those programs exist only as modules has a mind, but then we have only rediscovered Searle's formal argument.

So here we sit now, inside one of your artificial neurons, discussing the same subject that we've discussed in other messages: Searle's formal argument that programs are neither constitutive of nor sufficient for minds.

-gts
