[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sat Dec 26 15:05:45 UTC 2009


--- On Thu, 12/24/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> There is no real distinction between program level, machine
> level or atomic level. These are levels of description, for the
> benefit of the observer, and a description of something has no
> causal efficacy.

If there exists "no real distinction," then why do you think what applies at the program level does not also apply at the hardware level, or to the system in its entirety?

In any case the formal nature of programs becomes most obvious at the machine level, where the machine states represented by 1s and 0s have no semantic content, real or imagined. These two symbols and the states they represent differ in *form only*, and this is what is meant by "formal program." On these meaningless differences in form turn the gears of computer logic.
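A concrete illustration may help (a minimal sketch of my own, in Python; the particular bit pattern is arbitrary). One and the same 32-bit pattern reads as an integer, as a floating-point number, or as a string of bytes, depending entirely on the interpretation an observer brings to it. Nothing in the bits themselves selects among these meanings:

    import struct

    # One arbitrary 32-bit pattern, chosen only for illustration.
    bits = 0x42F6E979

    # The same bits, read three different ways by the observer:
    as_int = bits                                                # 1123477881
    as_float = struct.unpack('>f', bits.to_bytes(4, 'big'))[0]  # ~123.456
    as_bytes = bits.to_bytes(4, 'big')                          # b'B\xf6\xe9y'

    print(as_int, as_float, as_bytes)

The hardware manipulates the pattern identically in every case; the "integer," the "number," and the "text" exist only in the eye of the interpreter.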

Out of these meaningless differences in form we have created high-level languages, and with them programs that create the illusion of semantics and intrinsic intentionality, i.e., understanding. I see this remarkable achievement as evidence of human ingenuity -- not as evidence that computers really have minds.


> Understanding is something that is associated with
> understanding-like, or intelligent, behaviour. 

Understanding is associated with intelligent behavior, yes, but the two things do not equal one another.

> A program is just a 
> plan in your mind or on a piece of paper to help you arrange matter 
> in such a way as to give rise to this intelligent behaviour. 

Exactly. Programs act as blueprints for intelligent *behavior*.

> There was no plan behind the brain, but post hoc analysis can reveal 
> patterns which have an algorithmic description (provided that the 
> physics in the brain is computable). Now, if such patterns in the brain 
> do not detract from its understanding, why should similar patterns 
> detract from the understanding of a computer? 

If computers had understanding then those patterns we might find and write down would not detract from their understanding, any more than patterns of brain behavior detract from the brain's understanding. But how can computers that run formal programs have understanding?

> In both cases you can claim that the understanding comes from the actual 
> physical structure and behaviour, not from the description of that 
> physical structure and behaviour.

I don't claim computers have understanding. They act as if they have it (as in weak AI), but they do not actually have it (as in strong AI).

Let us say machine X has strong AI, and that we abstract from it a formal program that exactly describes and determines its intelligent behavior. We then run that abstracted formal program on a software/hardware system called computer Y. Computer Y will act exactly like machine X, but it will have only weak AI. (If you get that, then you've gotten what there is to get! :-)

Formal programs exist as abstract simulations. They do not equal the things they simulate: they contain the forms of things, but not their substance.

-gts


      


