[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Fri Jan 1 16:01:31 UTC 2010


2010/1/2 Gordon Swobe <gts_2000 at yahoo.com>:

> I happen to agree that we can duplicate a brain atom for atom and have the same person at the end (if I didn't then I would not identify with extropianism) but you had asserted something in a previous post suggesting that your "abstract sphere of mind" exists independently of the physical matter that comprises your brain. In my opinion you fall off the rails there and wander into the land of metaphysical dualism.

You destroy a person and make a copy, and you have the "same" person
again even if the original has been dead a million years. The physical
object doesn't survive, but the mind does; so the mind is not the same
as the physical object. Whether you call this dualism or not is a
matter of taste.

>> Are you a dualist regarding computer programs?
>
> No, but you on the other hand should describe yourself as such given that you believe we can get intentional entities from running programs. The conventional strong AI research program is based on that same false premise, where software = mind and hardware = brain, and it won't work for exactly that reason.

I was referring to ordinary programs that aren't considered conscious.
The program is not identical with the computer, since the same program
can be instantiated on different hardware. If you want to call that
dualism, you can.

>> The only serious error Searle makes is to claim that
>> computer programs can't generate consciousness while at the same
>> time holding that the brain can be described algorithmically.
>
> No error at all, except that you cannot or will not see past your dualist assumptions, or at least not far enough to see what Searle actually means. I had hoped that paper I referenced would bring you some clarity but I see it didn't.

As I said, I agree with that paper. I just think he's wrong about
computers and their potential for consciousness, a point he only
alludes to in passing there.

> What you cannot or refuse to see is that a formal program simulating the brain cannot cause consciousness in a s/h system that implements it any more than a simulated thunderstorm can cause wetness in that same s/h system. It makes no difference how perfectly that simulation describes the thing it simulates.
>
> If you expect to find consciousness in or stemming from a computer simulation of a brain then I would suppose you might also expect to eat a photo of a ham sandwich off a lunch menu and find that it tastes like the ham sandwich it simulates. After all, on your logic the simulation of the ham sandwich is implemented in the substrate of the menu. But that piece of paper won't taste much like a ham sandwich, now will it? And why not? Because, as I keep trying to communicate to you, simulations of things do not equal the things they simulate. Descriptions of things do not equal the things they describe.

You keep repeating this, but I have shown that a device which
reproduces the behaviour of a biological brain will also reproduce its
consciousness. The argument is robust in that it relies on no further
philosophical or scientific assumptions. How the brain's behaviour is
reproduced is not actually part of the argument. If it turns out that
the brain's behaviour can be described algorithmically, as Searle and
most cognitive scientists believe, then that establishes
computationalism; if not, it still establishes functionalism by other
means.


-- 
Stathis Papaioannou
