[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Fri Dec 18 12:17:58 UTC 2009


--- On Fri, 12/18/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> You seem to accept that dumb matter which itself does not
> have understanding can give rise to understanding, but not 
> that an appropriately programmed computer can pull off the 
> same miracle. Why not?

Biological brains do something we don't yet understand. Call it X. Whatever X may be, it gives the brain the capacity for intentionality. We don't yet know the details of X, but if we cannot refute Searle, then we must say this about it:

X != the running of formal syntactical programs.

X = some biological process that takes place in the brain in addition to, or instead of, running programs.


By the way, ignore those who say we can't define consciousness.

If it has subjective understanding of anything whatsoever -- in common parlance, if it can hold anything whatsoever in mind -- then it has consciousness. I prefer the word intentionality for our purposes here, defined roughly as the holding of anything whatsoever in mind. I would use intentionality in place of the more nebulous word consciousness more often, except that this use of the word makes sense mainly to philosophers (one can easily confuse it with the ordinary meaning of intentionality, which has to do with goal-oriented thinking). 

Another sign of consciousness: things that have it can overcome the symbol grounding problem. 

We find all these things in any good philosophical definition of consciousness: subjective understanding, subjective experience, semantics, intentionality, and the capacity to overcome the symbol grounding problem. If a thing has any one of them, then it has the rest, and it has consciousness.

Someone mentioned computer chess programs.

As I have it, chess programs have intelligence but not intentionality. They play chess, and they do it intelligently, but they don't *know* how to play chess. They have unconscious machine intelligence and nothing more. 

A chess application with strong AI would have intentionality. Not only would it play chess well, it would also have chess strategy consciously "in mind" just as human players do.

Because the problem of strong AI so defined seems intractable (in part because of Searle's work, but also because even AGI, which needn't even be strong, seems almost impossible), many people have simply forgotten the problem of strong AI, swept it under the rug, or otherwise just scoffed and gone into denial. It seems we have some of those deniers right here on this list, the last place in the world one should expect to find them.

-gts


      


