[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Tue Dec 15 20:42:41 UTC 2009


On Dec 15, 2009, Gordon Swobe wrote:

> I do take issue with your assumption that your artificial neurons will (by "logical necessity", as you put it in another message) produce exactly the same experience as real neurons merely by virtue of their having the same "interactions with their neighbours" as real neurons, especially in the realm of consciousness. We simply don't know if that's true. 

Of course we know that's true! The only way a neuron knows what its neighbor is doing is by examining that neuron's output.

 John K Clark


