[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Fri Dec 18 15:36:43 UTC 2009


--- On Fri, 12/18/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> The level of description which you call a computer program
> is, in the final analysis, just a set of rules to help you figure 
> out exactly how you should arrange a collection of matter so that it
> exhibits a desired behaviour

Our task here involves more than mimicking intelligent human behavior (weak AI). Strong AI is not about the behavior of neurons, brains, or computers. It's about *mindfulness*.

I don't disagree (nor would Searle) that artificial neurons such as those you describe might produce intelligent human-like behavior. Such a machine might seem very human. But would it have intentionality as in strong AI, or merely seem to have it as in weak AI?

If programs drive your artificial neurons (and they do), then Searle rightly challenges you to show how those behavior-driving programs can in some way constitute a mind; that is, he challenges you to show that you have not merely invented weak AI, which he does not contest.

> That you can describe the chemical reactions in the brain
> algorithmically should not detract from the brain's consciousness,

True.
 
> so why should an algorithmic description of a computer in action 
> detract from the computer's consciousness?

Programs that run algorithms do not and cannot have semantics. They do useful things, but they have no understanding of the things they do. Unless, of course, Searle's formal argument has flaws; whether it does is precisely what is at issue here.
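To make that syntax-versus-semantics point concrete, here is a rough sketch of my own (a toy Python program, not anything from Searle's paper; the table entries and names are invented for illustration). It relates symbol shapes to symbol shapes by rule, which is all any program ever does:

# A rule-following "room": it maps input symbols to output symbols by
# table lookup alone. Nothing here understands Chinese, yet the replies
# can look appropriate to a Chinese speaker outside the room.

RULEBOOK = {
    "ni hao": "ni hao ma?",   # "hello" -> "how are you?"
    "xie xie": "bu ke qi",    # "thank you" -> "you're welcome"
}

def room(symbol_in: str) -> str:
    # Pure syntax: match the shape of the input, return the paired shape.
    return RULEBOOK.get(symbol_in, "qing zai shuo yi bian")  # "please say it again"

print(room("ni hao"))  # -> "ni hao ma?", produced with zero understanding

The behavior can be made as convincing as you like by enlarging the rulebook, but nothing in the process ever attaches meaning to the symbols.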

-gts
