[ExI] The symbol grounding problem in strong AI
    John Clark
    jonkc at bellsouth.net
    Wed Dec 16 05:49:24 UTC 2009

On Dec 15, 2009, at 5:25 PM, Gordon Swobe wrote:
> an artificial neuron must behave in exactly the same way to external stimuli as does a natural neuron if and only if the internal processes of that artificial neuron exactly match those of the natural neuron.
Now that's just silly. A neuron has no way of knowing what internal process a neighboring neuron undergoes; it treats its neighbor as a black box. It's only interested in what that neuron does, not how it does it.
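A minimal sketch of that black-box point (the functions and threshold here are hypothetical, not anything from the thread): two "neurons" with entirely different internals but the same input/output behavior are indistinguishable to anything downstream that only sees their outputs.

    # Hypothetical stand-in for a biological neuron: fires when summed input
    # crosses a threshold.
    def biological_neuron(inputs):
        return 1 if sum(inputs) > 0.5 else 0

    # Different internal bookkeeping (beer cans and toilet paper, if you like),
    # but identical observable behavior.
    def artificial_neuron(inputs):
        total = 0.0
        for x in inputs:
            total += x
        return int(total > 0.5)

    # A downstream neuron only consumes the upstream output; it has no access
    # to, or interest in, how that output was produced.
    def downstream_neuron(upstream, inputs):
        return upstream(inputs)

    stimuli = [0.2, 0.4, 0.1]
    assert downstream_neuron(biological_neuron, stimuli) == downstream_neuron(artificial_neuron, stimuli)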
> To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity". 
I'll be damned if I know why you were astonished, and I'll be damned if I understand how it could be anything other than a logical necessity. And I don't understand the point you are trying to make: what's wrong with beer cans and toilet paper?
 John K Clark