[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Tue Jan 5 13:31:30 UTC 2010


--- On Tue, 1/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Is there only one type of c-neuron or is it possible to
> insert m-neurons which, though they are functionally identical to
> b-neurons, result in a different kind of consciousness?

I don't know what you mean by "different kind of consciousness". I will say this: if m-neurons cure our man Sam and he then takes LSD, it will affect his conscious experience just as it would for anyone else. 
 
> Philosophy has to give an answer that's in accordance with
> what would actually happen, what you would actually experience,
> otherwise it's worse than useless. The discussion we have been having 
> is an example of a philosophical problem with profound practical
> consequences. If I get a new super-fast computerised brain and you're 
> right I would be killing myself, whereas if I'm right I would become an
> immortal super-human. I think it's important to be sure of the
> answer before going ahead!

True. On the other hand, perhaps you could view it as something like Pascal's wager. You have little to lose by believing in digital immortality. If it doesn't work for you then you won't know about it. And you'll never know the truth from talking to the zombies who've tried it. When you ask them, they always say it worked just fine.

> You shouldn't dismiss the slippery slope argument so quickly. Either
> you suddenly become a zombie when a certain proportion of your neurons'
> internal workings are computerised or you don't. If you don't, then the
> options are that you don't become zombified at all or that you become
> zombified in proportion to how much of the neurons is computerised.
> Either sudden or gradual zombification seems implausible
> to me.

Gradual zombification seems plausible to me. In fact we've already discussed this same problem but with a different vocabulary. A week or two ago, I allowed that negligible formal programmification (is that a word?) of real brain processes would result only in negligible loss of intentionality.

>> I think that after the initial operation he becomes a
>> complete basket-case requiring remedial surgery, and that in
>> the end he becomes a philosophical zombie or something very
>> close to one. If his surgeon has experience then he becomes
>> a zombie or near zombie on day one.
> 
> I don't understand why you say this. Perhaps I haven't
> explained what I meant well. The p-neurons are drop-in replacements 
> for the b-neurons, just like pulling out the LM741 op amps in a
> piece of audio equipment and replacing them with TL071's. The TL071
> performs the same function as the 741 and has the same pin-out, so the
> equipment will function just the same

You've made the same assumption (wrongly, imo) as in your last experiment: that p-neurons will behave and function exactly like the b-neurons they replaced. They won't, except perhaps under epiphenomenalism. If you accept epiphenomenalism and reject the common, and in my opinion much more sensible, view that experience affects behavior, including neuronal behavior, then we need to discuss that philosophical problem before we can go forward.

It looks to me as though serious complications will arise for the first surgeon who attempts this surgery with p-neurons.

By the way, this conclusion seems much more apparent to me in this new experimental set-up of yours. In your last experiment, I wrote something about how the subject might turn left when he might otherwise have turned right. In this experiment I see that he might turn left onto a one-way street in the wrong direction. Fortunately for Cram (or at least for his body), the docs won't release him from the hospital until he passes the TT and reports normal subjective experiences. Cram's surgeon will keep replacing and programming neurons wherever necessary in his brain until his patient appears ready for life on the streets, zombifying him in the process.

> Now, isn't it clear from this that Cram must behave
> normally and must (at least) have normal experiences in the parts of
> his brain which aren't replaced?

No, see above.

> If Cram has neurons in his language centre replaced then he
> must be able to communicate normally and respond to verbal input
> normally in every other way: draw a picture, laugh with genuine
> amusement at a joke, engage in philosophical debate. He must also
> genuinely believe that he understands everything, since if he didn't 
> he would tell us.

No, he would not tell us! The surgeon programmed Cram to behave normally and to lie about his subjective experience, all the while naively believing that his efforts counted as cures for the symptoms and side-effects his patient reported.

Philosophical zombies have no experience. They know nothing whatsoever, but they lie about it. 

-gts
