[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sun Dec 20 19:29:01 UTC 2009


--- On Sun, 12/20/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> But it seems that you and Searle are saying that the CR
> lacks understanding *because* the man lacks understanding of
> Chinese, whereas the brain, with completely dumb components, has
> understanding.

The brain has understanding, yes, but Searle makes no claim about whether its components are dumb. You added that to his argument.

He starts from the self-evident axiom that brains have understanding and then asks whether software/hardware (S/H) systems can ever have it too. He concludes that they cannot, based on his logical argument, which I've posted here several times.

> So you are penalising the CR because it has smart
> components and because what it does has an algorithmic pattern.

He penalizes the CR only because it runs a formal program, and nobody has shown how programs can have minds capable of understanding the symbols they manipulate. In other words, nobody has shown his formal argument to be false. If somebody has seen it proved false, then point me to it.

I see people here, like Eugen, who scoff but offer no evidence that Searle's logic fails. Is it just an article of religious faith on ExI that programs have minds? And if it is, and if we cannot explain how it happens, should we then adopt the mystical philosophy that everything has mind, merely to protect the notion that programs do or will?

> By this reasoning, if neurons had their own separate rudimentary
> intelligence and if someone could see a pattern in the brain's
> functioning to which the term "algorithmic" could be applied, then
> the brain would lack understanding also.

No, Searle argues that even if we can describe brain processes algorithmically, those algorithms running on an S/H system would not result in understanding; it is not enough merely to simulate a brain in software running on a computer.

S/H systems are not hardware *enough*. 

-gts

More information about the extropy-chat mailing list