[ExI] Meaningless Symbols

Stathis Papaioannou stathisp at gmail.com
Wed Jan 13 13:55:17 UTC 2010


2010/1/13 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Tue, 1/12/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> I don't accept that semantics does not come from syntax
>> because I don't see where else, logically, semantics could come from.
>> However, if I accept it for the sake of argument, you have agreed in
>> the past that running a program incidentally will not destroy
>> semantics. So it is possible for you consistently to hold that
>> semantics does not come from syntax *and* that computers can have
>> semantics, due to their substance or their processes, just as in the
>> case of the brain.
>
> No, not if by "computer" you mean "software/hardware system".
>
> Although we might call the brain a type of computer, we cannot call it a computer of the s/h system type because the brain has semantics and s/h systems do not.
>
> Your p-neurons equal s/h systems, and in your thought experiments you network these s/h systems and then imagine that networked s/h systems have semantics.

Running formal programs does not (you claim) produce semantics, but
neither does it prevent semantics. Therefore, computers can have
semantics by virtue of some quality other than running formal
programs.

>> Yes, but the man in the room has an advantage over the
>> neurons in the brain, because he at least understands that he is
>> doing some sort of weird task, while the neurons understand nothing at
>> all. You would have to conclude that if the CR does not understand
>> Chinese, then a Chinese speaker's brain understands it even less.
>
> I would draw that conclusion only if I accepted that real Chinese brains are s/h systems, which I do not. In other words, I think you miss the lesson of the experiment, which is that real brains/minds do something we don't yet fully understand: they ground symbols, something s/h systems cannot do.

That misses the point of the CRA, which is to argue that since the man
has no understanding of Chinese, the system has no understanding of
Chinese. The argument ought not to assume from the start that the CR
has no understanding of Chinese on account of its being an s/h system,
since that is the very point at issue. The same move applies to the
brain: the neurons don't understand Chinese, therefore the brain
doesn't understand Chinese. But the brain does understand Chinese; so
the inference that if the components of a system lack understanding
then neither does the system is not valid.

> This leads to the next phase in the argument: that real brains have evolved a biological, non-digital means for grounding symbols.
>
>> I think it is logically impossible to create weak AI
>> neurons. If weak AI neurons were possible, then it would be
>> possible to arbitrarily remove any aspect of your consciousness
>> leaving you not only behaving as if nothing had changed but also
>> unaware that anything had changed. This would seem to go against any
>> coherent notion of consciousness: however mysterious and ineffable it
>> may be, you would at least expect that if your consciousness changed,
>> for example if you suddenly went blind or aphasic, you would notice
>> something a bit out of the ordinary had happened. If you think that
>> imperceptible radical change in consciousness is not self-contradictory,
>> then I suppose weak AI neurons are logically possible. But you would
>> then have the problem of explaining how you know now that you have not
>> gone blind or aphasic without realising it, and why you should care if
>> you had such an affliction.
>
> If you replace the neurons associated with "realizing it" then the patient will not realize it. If you leave those neurons alone but replace the neurons in other important parts of the brain, the patient will become a basket case in need of more surgery, as we have discussed already.

No, he won't become a basket case. If the patient's visual cortex is
replaced and the rest of his brain is intact, then (a) he will behave
as if he has normal vision because his motor cortex receives the same
signals as before, and (b) he will not notice that anything has
changed about his vision, since if he did he would tell you and that
would constitute a change in behaviour, as would going crazy. These
two things are *logically* required if you accept that p-neurons of
the type described are possible. There are several ways out of the
conundrum:

(1) p-neurons are impossible, because they won't behave like b-neurons
(i.e. there is something uncomputable about the behaviour of neurons);
(2) p-neurons are possible, but zombie p-neurons are impossible;
(3) zombie p-neurons are possible and your consciousness will fade
away without you noticing if they are installed in your head;
(4) zombie p-neurons are possible and you will notice your
consciousness fading away if they are installed in your head but you
won't be able to do anything about it.

That covers all the possibilities. I favour (2). Searle favours (4),
though apparently without realising that it entails an implausible
form of dualism (your thinking is done by something other than your
brain, something that runs in lockstep with your behaviour until the
p-neurons are installed). Your answer is that the patient will go mad,
but that simply isn't possible, since by the terms of the experiment
his brain is constrained to behave as sanely as it would have without
any tampering. I suspect you're making this point because you can see
the absurdity the thought experiment is designed to demonstrate but
don't feel comfortable committing to any of the above four options to
get out of it.


-- 
Stathis Papaioannou


