[ExI] The symbol grounding problem in strong AI

Mike Dougherty msd001 at gmail.com
Sat Dec 19 05:03:24 UTC 2009


2009/12/18 John Clark <jonkc at bellsouth.net>

> You want an explanation for mind, and that is a very natural thing to want,
> but what does "explanation" mean? In general, an explanation means breaking
> down a large, complex, and mysterious phenomenon until you find something
> that is understandable; it can mean nothing else. Science has done that with
> mind, but you object that there must be more to it than that because the
> basic building block science has found is so mundane. Well, of course it's
> mundane and simple; if it weren't, and that small part of the phenomenon were
> still complex and mysterious, then you wouldn't have explained anything.
>

And simple/mundane mechanisms generally seem more reliable than
complex/fancy ones: be glad our brains run on relatively cheap neurons,
else they might not work at all.

