[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Wed Jan 6 12:59:31 UTC 2010


--- On Tue, 1/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> No, I make no such claim. Cram's surgeon will no doubt
>> find a way to keep the man walking, even if semantically
>> brain-dead from the effective lobotomization of his
>> Wernicke's and related.
> 
> Well, Searle makes this claim. 

I don't think Searle ever considered a thought experiment exactly like the one we created here. In any case, in this experiment, I simply deny your claim that my position entails that the surgeon cannot keep the man walking. 

The surgeon starts with a patient whose semantic deficit stems from a brain lesion in Wernicke's area. He replaces the damaged b-neurons with p-neurons, believing, just as you do, that they will behave and function in every respect exactly as the healthy b-neurons they replace would have. On my account of p-neurons, however, they do not resolve the patient's symptoms, so the surgeon goes back in to attempt further cures, only creating more semantic deficits for the patient.

The surgeon keeps patching the software, so to speak, until the patient finally speaks and behaves normally, never realizing that each patch only further compromised his patient's intentionality. In the end he succeeds in creating a patient who reports normal experiences and passes the Turing test, oblivious to the fact that the patient now has little or no experience of understanding words, assuming he has any experience at all.

-gts

> Perhaps there is some confusion because Searle is talking about
> simulating a whole brain, not a neuron, but if you can make a zombie
> brain it should certainly be possible to make a zombie neuron. That's
> what a p-neuron is: it acts just like a b-neuron, the b-neurons
> around it think it's a b-neuron, but because it's computerised, you
> claim, it lacks the essentials for consciousness. By definition, if
> the p-neurons function as advertised they can be swapped for the
> equivalent b-neuron and the person will behave exactly the same and
> honestly believe that nothing has changed.
> 
> If you *don't* believe p-neurons like this are possible then you
> disagree with Searle. Instead, you believe that there is some aspect
> of brain physics that is uncomputable, and therefore that weak AI and
> philosophical zombies may not be possible. This is a logically
> consistent position, while Searle's is not. However, there is no
> scientific evidence that the brain uses uncomputable physics.
> 
> -- 
> Stathis Papaioannou