[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Thu Dec 24 02:39:15 UTC 2009


--- On Wed, 12/23/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> Or perhaps nothing "amazing" happens. Instead of
>> believing in magic, I find it easier to accept that the
>> computationalist theory of mind is simply incoherent. It
>> does not explain the facts.
> 
> So you find the idea that in some unknown way chemical
> reactions cause mind not particularly amazing, while the same happening
> with electric circuits is obviously incredible?

Not exactly, but close. Brains contain something like electric circuits, but I still find it incredible that a mind that runs only on programs can have everything that biological minds have. Again, I find the computationalist theory of mind incredible.

> A computer only runs a formal program in the mind of the
> programmer. 

Where did you buy your computer? I built mine, and I can tell you it runs formal programs in RAM. :)

> A computer undergoes internal movements according to the laws
> of physics, which movements can (incidentally) be described
> algorithmically. This is the most basic level of
> description. 

Yes.

> The programmer comes along and gives a chunkier, higher level description.
> The program is like a plan to help the programmer figure out where to
> place the various parts of the computer in relation to each other so 
> that they will do a particular job. Both the computer and the brain 
> go clickety-clack, clickety-clack and produce similar intelligent 
> behaviour. The computer's parts were deliberately arranged by the 
> programmer in order to bring this result about, whereas the brain's 
> parts were arranged in a spontaneous and somewhat haphazard way by 
> nature, making it more difficult to see the algorithmic pattern 
> (although it must be there, at least at the level of basic physics). In 
> the final analysis, it is this difference between them that convinces 
> you the computer doesn't understand what it's doing and the brain does.

No, in the final analysis nothing can get meaning (semantics) from form-based rules (syntax). It makes not one whit of difference what sort of entity happens to perform the syntactical operations. Neither computers nor biological brains can get semantics from syntax.


> What is it to learn the meaning of the word "dog" if not to
> associate its sound or shape with an image of a dog?

Both you and the computer make that association, and both of you act accordingly. But only you know about it, i.e., only you know the meaning.
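
To make the point concrete, here is a toy sketch of the kind of association I mean (the names and file paths are invented purely for illustration, not taken from anything you wrote). The program pairs the word "dog" with a stored image by formal lookup, and that is all it does; nothing in the process amounts to knowing what a dog is.

    # Illustrative sketch only: a purely syntactic word-to-image association.
    # The table and function names here are hypothetical.

    word_to_image = {
        "dog": "images/dog.jpg",  # the shape of the word, paired with a stored picture
        "cat": "images/cat.jpg",
    }

    def associate(word):
        """Return the image paired with a word, if any.

        The pairing is pure syntax: symbols matched by formal rule,
        with no understanding of what the symbols are about.
        """
        return word_to_image.get(word)

    print(associate("dog"))  # -> images/dog.jpg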


> Anyway, despite the above, and without any help from
> Searle, it might still seem reasonable to entertain the possibility 
> that there is something substrate-specific about consciousness, and 
> fear that if you agree to upload your brain the result would be a 
> mindless zombie. 

I would not use the word "substrate-specific" but I do like your mention of "chemical reactions" in your first paragraph above. 

> That is where the partial brain replacement (e.g. of the visual
> cortex or Wernicke's area) thought experiment comes into play,

I've given this idea some thought today, by the way.

We can take your experiment deeper: instead of creating a program-driven nano-neuron to substitute for the natural neuron, we keep everything about the natural neuron and replace only the nucleus. This neuron will appear even more natural than yours. Now we take it another step: we keep the nucleus and replace only the DNA inside it with artificial program-driven DNA (whatever that might look like). And so on. In the limit we will have manufactured natural, program-less neurons.

I don't know whether Searle (or anyone else) has considered the ramifications of the sort of progression I describe here for Searle's philosophy, but it seems to me that on Searle's view the person's intentionality would become increasingly apparent to him as his brain became driven less by abstract formal programs and more by natural material processes.

This also leaves open the possibility that your more basic nano-neurons, the ones you've already supposed, would not deprive the subject completely of intentionality. Perhaps your subject would become somewhat dim but not completely lose his grip on reality.


-gts





      


