[ExI] The symbol grounding problem in strong AI
John Clark jonkc at bellsouth.net
Tue Dec 15 16:17:02 UTC 2009

On Dec 15, 2009, at 8:28 AM, Gordon Swobe wrote:
> I do take issue with your assumption that your artificial neurons will (by "logical necessity", as you put it in another message) produce exactly the same experience as real neurons merely by virtue of their having the same "interactions with their neighbours" as real neurons, especially in the realm of consciousness. We simply don't know if that's true. 
So you think those neighboring neurons will respond differently even when the stimulus they receive is identical, and that the difference depends on the inner workings of the neurons rather than on how they communicate their output to the outside world. But a difference that never shows up in any input or output plays no part in the physical causal chain; in other words, you believe in a soul. I don't.

John K Clark