[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Fri Jan 1 04:09:07 UTC 2010


2010/1/1 Gordon Swobe <gts_2000 at yahoo.com>:

>> After I die my mind can be instantiated again multiple
>> times, with different matter.
>
> I see. So then not only do you believe you have something like a soul (though you use this euphemism "sphere of mind") you believe also in the possible multiple reincarnations of your soul. Interesting.

Even if it turns out that the brain is uncomputable, the mind can be
duplicated by assembling atoms in the same configuration as the
original brain. If you accept this and you describe it as transfer of
a soul from one body to another, then you believe in a soul. Most
scientists and rational philosophers believe this but don't call it a
soul, preferring to reserve that term for a supernatural entity
created by God. Indeed, those who think your mind *won't* be
duplicated if your brain is duplicated at least tacitly believe in a
supernatural soul.

>> If the brain were identical with the mind this would
>> not be possible:
>
> Naturally you must also believe in the duality of mind and matter, an idea left over like a bad hangover from Descartes and other dualists. Your beliefs above would otherwise make no sense to you.

Are you a dualist regarding computer programs? On the one hand there
is the physical hardware implementing the program, and on the other
hand there is the abstract program itself. If that is dualism, then
the term could be equally well applied to the mind/body distinction.

>> These metaphysical musings are interesting but have no
>> bearing on the rigorous argument presented before,
>
> On the contrary, they most certainly do. I will tell you this in no uncertain terms: you will never understand Searle until you learn to see past the sort of religious ideas you have presented above. And until you understand him, you won't understand what you need to do to refute his argument.
>
> You might start here:
>
> http://socrates.berkeley.edu/~jsearle/Consciousness1.rtf

There isn't actually anything in that paper that I, or most of the others
on this list who have been arguing with you, would disagree with.
The only serious error Searle makes is to claim that computer programs
can't generate consciousness while at the same time holding that the
brain can be described algorithmically. These two ideas lead to an
internal inconsistency, which is the worst sort of philosophical
error.


-- 
Stathis Papaioannou
