[ExI] Meaningless Symbols

Gordon Swobe gts_2000 at yahoo.com
Wed Jan 13 02:23:50 UTC 2010


--- On Tue, 1/12/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> Now we know more about the mind than we did before,
>> even if we don't yet know the complete answer.
> 
> It's not much of an answer. I was hoping you might say
> something like, understanding is due to a special chemical reaction in
> the brain...

Well, yes, clearly neurons, neurochemistry and other biological factors in the brain enable our understanding of symbols. I'm sorry I can't tell you exactly how the science works; neuroscience still has much work to do. But the conclusion seems inescapable. To deny it, one must leave the sane world of philosophical monism and enter either the not-so-sane world of dualism, in which mental phenomena exist in some ephemeral netherworld, or the similarly not-so-sane world of idealism, in which matter does not even exist. Of course I'm making some value judgments here; dualists and idealists have a right to express their opinions too.

> In all that you and Searle have said, the strongest
> statement you can make is that a computer that is programmed to
> behave like a brain will not *necessarily* have the consciousness of
> the brain.

I can say this with extremely high confidence: semantics does not come from syntax, and software/hardware systems as they exist today merely run syntactic programs. For this reason, today's s/h systems cannot have semantics; that is, they cannot overcome the symbol grounding problem.

Many philosophers have offered rebuttals to Searle's argument, but none of the reputable rebuttals deny the basic truth that the man in the room cannot come to understand the symbols merely by manipulating them according to rules of syntax. It just can't happen.
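
To make concrete what I mean by manipulating symbols according to rules of syntax, here is a toy sketch in Python (my own illustration; the shape names and the rule table are invented for the example, they are not anything from Searle): the program, like the man in the room, rewrites incoming shapes into outgoing shapes purely by looking them up in a rulebook, with no reference anywhere to what any shape means.

    # Toy illustration of purely syntactic symbol manipulation.
    # The rulebook pairs input shapes with output shapes; nothing here
    # refers to what any symbol means. The rules are invented for
    # illustration only.

    RULEBOOK = {
        ("SQUIGGLE", "SQUOGGLE"): ("BLOT", "BLIT"),
        ("BLIT",): ("SQUIGGLE",),
    }

    def operator(shapes):
        """Look up the incoming shapes and return whatever shapes the
        rulebook dictates. The operator, like the man in the room, has
        no idea what any of the shapes denote."""
        return RULEBOOK.get(tuple(shapes), ("UNRECOGNIZED-SHAPE",))

    print(operator(("SQUIGGLE", "SQUOGGLE")))  # -> ('BLOT', 'BLIT')
    print(operator(("BLIT",)))                 # -> ('SQUIGGLE',)

Whatever apparent intelligence shows up in the output lives in the rulebook and in whoever wrote it, not in the lookup itself.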

(And the truth is that it's even worse than it seems: not only does the semantics come from the human operators, but so too does the syntax. This means that even if computers could get semantics from syntax, we still could not say that computers derive semantics independently of their human operators. But that's another story...)

> In contrast, I have presented an argument which shows that
> it is *impossible* to separate understanding from behaviour. 

You and I both know that philosophical zombies do not defy any rules of logic, so I don't know what you mean by "impossible". In fact, to my way of thinking, your thought experiments do exactly what you say cannot be done: they separate understanding from behaviour, creating semi-robots that act as if they have intentionality but don't, or that have compromised intentionality. They create weak AI.

More in the morning if I get a minute.

-gts