[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Fri Dec 18 16:29:20 UTC 2009


On Dec 17, 2009, at 7:09 PM, Gordon Swobe wrote:

> Let's go inside that neuron and look around. What do we see? 
> I see a computer running a formal program, a program no different in principle from those running on the computer in front of me right now. That program has no understanding of the symbols it manipulates, yet it drives all the behavior of the neuron. On your account your brain runs billions of these mindless programs, and together they comprise the greater program that causes your thoughts and behaviors. But I see nothing in your scenario that explains how billions of mindless neurons come together to create mindfulness.

You want an explanation for mind, and that is a very natural thing to want, but what does "explanation" mean? In general, an explanation means breaking down a large, complex, and mysterious phenomenon until you find something that is understandable; it can mean nothing else. Science has done that with mind, but you object that there must be more to it than that because the basic building block science has found is so mundane. Well, of course it's mundane and simple; if it weren't, and that small part of the phenomenon were still complex and mysterious, then you wouldn't have explained anything.
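
To make the picture of a "mindless formal program" concrete, here is a minimal illustrative sketch in Python of a McCulloch-Pitts-style threshold unit; the function name, weights, and threshold are invented for this example and appear in neither post:

    # Illustrative sketch only: a toy "formal program" for a single neuron.
    # It blindly manipulates numbers according to a fixed rule; the weights
    # and threshold below are made up for illustration.
    def neuron_step(inputs, weights, threshold):
        """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # The rule itself "understands" nothing; any mindfulness would have to
    # come from how billions of such units are wired together.
    print(neuron_step([1, 0, 1], [0.5, 0.2, 0.4], 0.8))  # prints 1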

 John K Clark


