[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Fri Jan 1 16:22:21 UTC 2010


-- On Thu, 12/31/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Even if it turns out that the brain is uncomputable, the
> mind can be duplicated by assembling atoms in the same configuration
> as the original brain.

I happen to agree that we can duplicate a brain atom for atom and have the same person at the end (if I didn't, I would not identify with extropianism), but in a previous post you asserted something suggesting that your "abstract sphere of mind" exists independently of the physical matter that comprises your brain. In my opinion you fall off the rails there and wander into the land of metaphysical dualism.

> Are you a dualist regarding computer programs?

No, but you, on the other hand, should describe yourself as one, given that you believe we can get intentional entities from running programs. The conventional strong AI research program rests on that same false premise, in which software = mind and hardware = brain, and it won't work for exactly that reason.

> The only serious error Searle makes is to claim that
> computer programs can't generate consciousness while at the same
> time holding that the brain can be described algorithmically.

No error at all, except that you cannot or will not see past your dualist assumptions, or at least not far enough to see what Searle actually means. I had hoped the paper I referenced would bring you some clarity, but I see it didn't.

What you cannot or will not see is that a formal program simulating the brain cannot cause consciousness in an s/h system that implements it, any more than a simulated thunderstorm can cause wetness in that same s/h system. It makes no difference how perfectly the simulation describes the thing it simulates.

If you expect to find consciousness in, or stemming from, a computer simulation of a brain, then I suppose you might also expect to eat a photo of a ham sandwich off a lunch menu and find that it tastes like the ham sandwich it simulates. After all, on your logic the simulation of the ham sandwich is implemented in the substrate of the menu. But that piece of paper won't taste much like a ham sandwich, now will it? And why not? Because, as I keep trying to communicate to you, simulations of things do not equal the things they simulate. Descriptions of things do not equal the things they describe.

-gts