[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sat Dec 19 14:48:26 UTC 2009


--- On Sat, 12/19/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> > After a complete replacement of my brain with your
> nano-neuron brain...
> 
> It's important that you consider first the case of
> *partial* replacement, eg. all of your visual cortex but the rest of
> the brain left intact.

I based all my replies, each of which you disagreed with, on a complete replacement, because partial replacement just seems too speculative to me. (Complete replacement is extremely speculative as it is!)

I simply don't know (nor do you, nor does Searle) what role the neurons in the visual cortex play in conscious awareness. Do they play only a functional role, as I think you suppose, acting as mere conduits of visual information to consciousness, or do they also play a role in the conscious experience itself? I don't know, and I don't think anyone does.

Perhaps Searle ventured a guess, but I don't think we can skewer him for being a good sport and playing along with the game.

> You're going very far in postulating this strange theory of
> partial zombiehood 

I think I postulated a highly speculative theory of complete zombiehood, built on top of your already highly speculative design for computerized artificial neurons that behave identically to natural neurons, and on top of your highly speculative theory that you could use them to replace my natural neurons without killing me in the process. Lots of speculation going on there, and your name appears on a lot of it. :-)

> The alternative simpler and more plausible theory is
> that if the artificial neurons reproduce the behaviour of
> biological neurons, then they also reproduce the consciousness 
> of biological neurons.

I disagree completely. We simply cannot logically deduce consciousness from behavior alone, whether we consider the behavior of neurons or of brains or of persons or of doorknobs. In fact, the behaviorist school of psychology came along for exactly that reason, and the idea also infiltrated the philosophy of mind. (John Clark has tried to hang me with that rope, by the way, but in the process he denied his own intentionality.)

Likewise, we cannot deduce from evidence of the behavior of a neuron that it has what the brain needs to produce a mind. At best we can hope it has what it needs to produce the correct behavior of the organism. I can hold that assumption in mind only long enough to play the zombie game.

-gts




> > In all the above except the second to last, I lack intentionality.
> >
> >> Well how about this theory: it's not the program that has
> >> consciousness, since a program is just an abstraction. It's the
> >> physical processes the machine undergoes while running the program
> >> that causes the consciousness. Whether these processes can be
> >> interpreted as a program or not doesn't change their consciousness.
> >
> > I don't think S/H systems have minds but I do think you've pointed
> > in the right direction. I think matter matters. More on this another
> > time.
>
> But you also think that if the matter behaves in such a way that it
> can be interpreted as implementing a computer program it lacks
> consciousness. The CR lacks understanding because the man in the room,
> who can be seen as implementing a program, lacks understanding;
> whereas a different system which produces similar behaviour but with
> dumb components the interactions of which can't be recognised as
> algorithmic has understanding. You are penalising the CR because it
> has something extra in the way of pattern and intelligence.
>
> --
> Stathis Papaioannou