[ExI] Meaningless Symbols

Gordon Swobe gts_2000 at yahoo.com
Wed Jan 13 12:44:50 UTC 2010


--- On Tue, 1/12/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

> I don't accept that semantics does not come from syntax
> because I don't see where else, logically, semantics could come from.
> However, if I accept it for the sake of argument, you have agreed in
> the past that running a program incidentally will not destroy
> semantics. So it is possible for you to consistently hold that
> semantics does not come from syntax *and* that computers can have 
> semantics, due to their substance or their processes, just as in the 
> case of the brain.

No, not if by "computer" you mean "software/hardware system".  

Although we might call the brain a type of computer, we cannot call it a computer of the s/h system type because the brain has semantics and s/h systems do not.

Your p-neurons just are s/h systems, and in your thought experiments you network them together and then imagine that the resulting network has semantics.

> Yes, but the man in the room has an advantage over the
> neurons in the brain, because he at least understands that he is 
> doing some sort of weird task, while the neurons understand nothing at 
> all. You would have to conclude that if the CR does not understand
> Chinese, then a Chinese speaker's brain understands it even less.

I would draw that conclusion only if I believed that real Chinese brains were s/h systems, which I do not. In other words, I think you miss the lesson of the experiment, which is that real brains/minds do something we don't yet fully understand: they ground symbols, something s/h systems cannot do.
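
To make that concrete, here is a minimal sketch in Python of what an s/h system does. The symbols and rules are invented for illustration (nothing here comes from Searle's paper or from your thought experiment): the program pairs input strings with output strings by their form alone, and nothing in it attaches those strings to anything in the world.

    # A toy "rule book": inputs paired with outputs purely by their shape.
    # The entries are made up for illustration.
    RULE_BOOK = {
        "squiggle squiggle": "squoggle",
        "squoggle squiggle": "squiggle squoggle",
    }

    def room(symbols):
        """Return whatever output the rule book pairs with the input.

        The lookup is driven entirely by the form of the string; the
        program has no access to, and no need of, what the symbols mean.
        """
        return RULE_BOOK.get(symbols, "squiggle")  # default reply, also a rule

    print(room("squiggle squiggle"))  # prints: squoggle

However large the rule book grows, and however many copies of room() you wire together, you get only more of the same formal shuffling; the grounding has to come from somewhere else.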

This leads to the next phase in the argument: that real brains have evolved a biological, non-digital means for grounding symbols. 

> I think it is logically impossible to create weak AI
> neurons. If weak AI neurons were possible, then it would be 
> possible to arbitrarily remove any aspect of your consciousness 
> leaving you not only behaving as if nothing had changed but also 
> unaware that anything had changed. This would seem to go against any 
> coherent notion of consciousness: however mysterious and ineffable it 
> may be, you would at least expect that if your consciousness changed, 
> for example if you suddenly went blind or aphasic, you would notice that
> something a bit out of the ordinary had happened. If you think that an
> imperceptible radical change in consciousness is not self-contradictory,
> then I suppose weak AI neurons are logically possible. But you would 
> then have the problem of explaining how you know now that you have not 
> gone blind or aphasic without realising it, and why you should care if 
> you had such an affliction.

If you replace the neurons associated with "realizing it," then the patient will not realize it. If you leave those neurons alone but replace the neurons in other important parts of the brain, the patient will become a basket case in need of more surgery, as we have already discussed.

It seems to me that in your laboratory you create many kinds of strange Frankenstein monsters that think and do absurd and self-contradictory things, depending on which neurons you replace, and that you then try to draw meaningful conclusions from the disturbed thoughts and behaviors of the monsters you have yourself created.

In the final analysis, will a person whose brain consists entirely of p-neurons have strong AI? I think the answer is no, for the same reason that I think a network of ordinary computers does not. 

-gts
