[ExI] The symbol grounding problem in strong AI
Stefano Vaj
stefano.vaj at gmail.com
Mon Dec 28 13:19:16 UTC 2009
2009/12/28 Stathis Papaioannou <stathisp at gmail.com>
> So (a) is incoherent and (b) implies the existence of an immaterial
> soul that does your thinking in concert with the brain until you mess
> with it by putting in artificial neurons. That leaves (c) as the only
> plausible alternative.
>
It sounds plausible enough to me.
But, once more, isn't the whole issue pretty close to koan-like questions such
as "what sound does a falling tree make when nobody hears it fall?".
What obfuscates the AGI debate is IMHO an abuse of poorly defined terms
such as "consciousness", etc., in an implicitly metaphysical and essentialist,
rather than phenomenal, sense. This does not take one very far, either in the
discussion of "artificial" brains or in the understanding of organic
ones.
--
Stefano Vaj