[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Thu Dec 17 17:39:51 UTC 2009


On Dec 16, 2009,  Gordon Swobe wrote:

> But did you understand that the "this artificial neuron" to which you referred exists only as a computer simulation? I.e., only as some lines of code, only as some zeros and ones, only as some 'ons' and 'offs', only as some stuff going on in RAM?

There is a natural neuron in your head right now that exists only as a collection of atoms that move around by gaining or losing electrons. RAM works by gaining and losing electrons too.
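For illustration, here is a minimal sketch of the kind of simulated neuron Swobe is describing -- a simple leaky integrate-and-fire model. The class name and parameters are hypothetical, chosen only to make the point concrete: the "neuron" is nothing but a bit of state in RAM updated by a few lines of code.

    # A minimal sketch of a simulated neuron: a leaky integrate-and-fire model.
    # Names and parameter values are illustrative, not any particular model
    # from the literature. The whole "neuron" is just a float in memory plus
    # a small update rule.

    class SimulatedNeuron:
        def __init__(self, threshold=1.0, leak=0.95):
            self.potential = 0.0      # membrane potential, stored in RAM
            self.threshold = threshold
            self.leak = leak

        def step(self, input_current):
            """Integrate the input, leak a little, fire if over threshold."""
            self.potential = self.potential * self.leak + input_current
            if self.potential >= self.threshold:
                self.potential = 0.0  # reset after firing
                return 1              # spike passed to downstream neurons
            return 0                  # no spike this step

    if __name__ == "__main__":
        neuron = SimulatedNeuron()
        spikes = [neuron.step(0.3) for _ in range(20)]
        print(spikes)

Whether such a thing could ever be conscious is exactly what is in dispute; the sketch only shows what "existing as lines of code" amounts to.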

> And do you really hold the position that contrary to Searle's claim, this artificial neuron that I've described has consciousness?

I don't think that one neuron, artificial or otherwise, has consciousness, but apparently you do. You said that it's the internal state of the neuron that matters for consciousness, not how it communicates with other neurons, even though that communication is obviously the only thing that can determine the large-scale behavior of the being. If you were right, then one neuron would be sufficient for consciousness. I don't think you're right.

 John K Clark
