[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Thu Dec 24 12:33:05 UTC 2009


--- On Thu, 12/24/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> What is it about the makeup of your computer that marks it
> as implementing formal programs? Because you built it you can
> see certain patterns in it which represent the programs, but this is
> just you superimposing an interpretation on it. 

With all due respect, I simply don't buy into that sort of post-modernist nonsense! Although we interpret reality, reality does not consist only of our interpretations of it. A reality exists apart from our interpretations (we don't merely imagine it), and some interpretations of computers match the facts about computers better than others.

However, notwithstanding my joking around in my previous post, I will agree with you that there exists a sense in which we might say that formal programs exist only in the minds of humans and not "in computers". But that supports Searle's position that programs exist only as abstractions. When we replace real things with abstract descriptions of those real things, we lose the thing and have in its place only a description of the thing. This happens in your program-driven neurons, for example. To whatever degree we replace the real machinery of the brain with abstract descriptions of that machinery, to that same degree we lose the real machine that thinks.

Computer simulations of things describe but do not equal the things they simulate, and I can't emphasize this enough.

> You believe that programs can't give rise to minds, but the
> right kind of physical activity can. Would you then object to the
> theory that it isn't the program that gives rise to the computer's mind,
> but the physical activity that takes place during the program's
> implementation?

I object not because of the physical activity (I like that part of your argument) but because that physical activity represents only the implementation of purely syntactic operations in a formal program. As I mentioned in my last post, syntax cannot give rise to semantics, no matter what entity performs the operations.

>> This also leaves open the possibility that your more
>> basic nano-neurons, those you've already supposed, would not
>> deprive the subject completely of intentionality. Perhaps
>> your subject would become somewhat dim but not completely
>> lose his grip on reality.
> 
> I think you've missed the main point of the thought
> experiment, 

I don't think I've missed your point, and I also don't want you to think I have finished discussing it. I just leap-frogged ahead of it to ponder your thought experiment in other ways that, to my way of thinking, really address the subject of this thread. I found a sense in which I can agree with you. A man whose neurons were completely replaced with your artificial neurons might not lose all sense of meaning; he might not completely lose his capacity to ground symbols, as I had originally supposed. But as I wrote above, to the extent that we replace his biological machinery with abstract descriptions of it, to that same extent he loses some of that capacity.

At one extreme we can replace his entire brain with a program, resulting in mindlessness. At the other extreme we can replace only the finest details of his brain with programs, leaving the material of his brain almost completely intact. In that case he keeps most of his capacity for intentionality, in proportion to the extent that we leave his brain alone and do not replace it with abstract formal descriptions of what *used to be* his brain.

I'll keep coming back to your thought experiment. As I said, I'm not done with it; I just have serious time constraints.

I appreciate both your thoughtfulness and your courtesy, Stathis. I'm enjoying this discussion.

-gts
