[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sun Jan 3 16:20:14 UTC 2010


--- On Sun, 1/3/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Thank you for clearly answering the question.

You're welcome.

Suggested abbreviations and conventions:

m-neurons = material ("clockwork") artificial neurons
p-neurons = programmatic artificial neurons

Sam = the patient with the m-neurons
Cram = the patient with the p-neurons (CRA-man)

(If Sam and Cram look familiar, it's because I used these names in a similar thought experiment of my own design.)

> Firstly, I understand that you have no philosophical
> objection to the idea that the clockwork neurons *could* have 
> consciousness, but you don't think that they *must* have consciousness, 
> since you don't (to this point) believe as I do that behaving like normal
> neurons is sufficient for this conclusion. Is that right? 

No. Because I reject epiphenomenalism, I think Sam cannot pass the TT without genuine intentionality. If Sam's m-neurons fail to produce a passing TT score for him, then we have no choice but to take them back to the store and demand a refund.

> Moreover, if consciousness is linked to substrate rather than function
> then it is possible that the clockwork neurons are conscious but with
> a different type of consciousness.

If Sam passes the TT and reports normal subjective experiences from his m-neurons, then I will consider him cured. I have no concerns about the "type" of consciousness.

> Secondly, suppose we agree that clockwork neurons can give
> rise to consciousness. What would happen if they looked like
> conventional clockwork at one level but at higher resolution we could
> see that they were driven by digital circuits, like the digital mechanism
> driving most modern clocks with analogue displays? That is, would
> the low level computations going on in these neurons be enough to
> change or eliminate their consciousness?

Yes. In that case the salesperson deceived us: he sold us p-neurons in a box labeled m-neurons. And if we cannot detect the digital nature of these neurons by careful physical inspection, and must instead conceive of some digital platonic realm that drives or causes material objects, then you will have introduced into our experiment the quasi-religious philosophical idea of substance or property dualism.

> Finally, the most important point. The patient with the computerised 
> neurons behaves normally and says he feels normal.

Yes.

> Moreover, he actually believes he feels normal and that he understands 
> everything said to him, since otherwise he would tell us something is
> wrong. 

No, he does not "actually" believe anything. He merely reports that he feels normal and that he understands. His surgeon programmed all of his p-neurons, including but not limited to those in Wernicke's area, such that he would pass the TT and report healthy intentionality.


> He processes the verbal information in the artificial part of his
> brain (Wernicke's area) and passes it to the rest of his brain
> normally: for example, if you describe a scene he can draw
> a picture of it, if you tell him something amusing he will laugh, and
> if you describe a complex problem he will think about it and
> propose a solution. But despite this, he will understand nothing, and
> will simply have the delusional belief...

He will have no conscious beliefs, delusional or otherwise.

> That a person could be a zombie and not know it is
> logically possible, since a zombie by definition doesn't know anything; 
> but that a person could be a partial zombie and be systematically 
> unaware of this even with the non-zombified part of his brain seems to me
> incoherent. 

I see nothing incoherent about it, except when you ask me to imagine the unimaginable, as you did in your last thought experiment.

In effect, the relevant parts of Cram's brain act like a computer, or a mesh of computers, running programs. That computer network receives symbolic inputs and generates symbolic outputs. Cram passes the TT, yet he has no grasp of the meanings of the symbols his computerized brain manipulates. And if the surgeon programmed the p-neurons correctly, then those parts of Cram's brain associated with "reporting subjective feelings" will run programs that ensure Cram talks very much like Sam.
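
To make the picture concrete, here is a minimal sketch, in ordinary Python, of what I mean by a system that manipulates symbols without grasping them. The rulebook and replies are my own invented examples, nothing more:

  # A toy "rulebook" pairing input strings with output strings. The
  # pairs here are invented for illustration; any pairs would serve.
  RULEBOOK = {
      "how do you feel?": "I feel perfectly normal, thank you.",
      "do you understand me?": "Yes, I understand every word.",
  }

  def respond(symbols: str) -> str:
      """Map an input string to an output string by formal rule alone.

      Nothing here attaches meaning to the tokens; the mapping would
      work just as well if every string were an arbitrary numeral.
      """
      return RULEBOOK.get(symbols.lower(), "Could you rephrase that?")

  print(respond("Do you understand me?"))  # -> "Yes, I understand every word."

The program answers correctly by rule, yet nothing in it understands the question.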

We cannot distinguish Cram from Sam except with philosophical arguments. If we can, then one patient or the other has not overcome his illness, and one surgeon or the other has failed to do his job.

> How do you know that you're not a partial zombie now, unable to
> understand anything you are reading? 

I know because I do understand your words, and I know that I do. (Contrast this with your last thought experiment, in which I could not even say with certainty that I existed, much less that I understood anything.)

> What reason is there to prefer normal neurons to computerised zombie 
> neurons given that neither you nor anyone else can ever notice a 
> difference? 

I notice the difference and I prefer existence.

> This is how far you have to go in order to maintain the belief that 
> neural function and consciousness can be separated. So why not accept the
> simpler, logically consistent and scientifically plausible
> explanation that is functionalism?

You assume here that I have followed your argument.

> I actually believe that semantics can *only* come from syntax,

As a programmer of syntax, I want to believe that too. It hasn't happened. :)
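
For what it's worth, a toy example (my own, and nothing deeper than ordinary code) of why syntax alone does not fix semantics:

  # The same formal rule table admits two different readings, so the
  # syntax alone does not determine what "a" and "b" mean.
  TABLE = {("a", "a"): "a", ("a", "b"): "b",
           ("b", "a"): "b", ("b", "b"): "a"}

  def op(x, y):
      return TABLE[(x, y)]

  # Reading 1: a = 0, b = 1, op = addition modulo 2.
  # Reading 2: a = False, b = True, op = exclusive-or.
  # Every result the rules license is correct under both readings.
  assert op("b", "b") == "a"  # 1 + 1 = 0 (mod 2); True XOR True = False

The program runs the same derivations either way; the meanings come from us, the interpreters, not from the rules.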


-gts