[ExI] Meaningless Symbols

Stathis Papaioannou stathisp at gmail.com
Wed Jan 13 04:35:58 UTC 2010


2010/1/13 Gordon Swobe <gts_2000 at yahoo.com>:

>> In all that you and Searle have said, the strongest
>> statement you can make is that a computer that is programmed to
>> behave like a brain will not *necessarily* have the consciousness of
>> the brain.
>
> I can say this with extremely high confidence: semantics does not come from syntax, and software/hardware systems as they exist today merely run syntactical programs. For this reason s/h systems of today cannot have semantics, i.e., they cannot overcome the symbol grounding problem.

I don't accept that semantics does not come from syntax, because I
don't see where else, logically, semantics could come from. However,
even if I accept it for the sake of argument, you have agreed in the
past that running a program incidentally will not destroy semantics.
So it is possible for you to consistently hold that semantics does not
come from syntax *and* that computers can have semantics, due to their
substance or their processes, just as in the case of the brain.

> Many philosophers have offered rebuttals to Searle's argument, but none of the reputable rebuttals deny the basic truth that the man in the room cannot understand symbols from manipulating them according to rules of syntax. It just can't happen.

Yes, but the man in the room has an advantage over the neurons in the
brain, because he at least understands that he is doing some sort of
weird task, while the neurons understand nothing at all. You would
have to conclude that if the CR does not understand Chinese, then a
Chinese speaker's brain understands it even less.

>> In contrast, I have presented an argument which shows that
>> it is *impossible* to separate understanding from behaviour.
>
> You and I both know that philosophical zombies do not defy any rules of logic. So I don't know what you mean by "impossible". In fact to my way of thinking your experiments do exactly that: they create semi-robots that act like they have intentionality but don't, or which have compromised intentionality. They create weak AI.

I think it is logically impossible to create weak AI neurons. If weak
AI neurons were possible, then it would be possible to arbitrarily
remove any aspect of your consciousness, leaving you not only behaving
as if nothing had changed but also unaware that anything had changed.
This would seem to go against any coherent notion of consciousness:
however mysterious and ineffable it may be, you would at least expect
that if your consciousness changed, for example if you suddenly went
blind or aphasic, you would notice that something a bit out of the
ordinary had happened. If you think that an imperceptible radical
change in consciousness is not self-contradictory, then I suppose weak
AI neurons are logically possible. But you would then have the problem
of explaining how you know now that you have not gone blind or aphasic
without realising it, and why you should care if you had such an
affliction.


-- 
Stathis Papaioannou
