[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Dec 14 22:08:03 UTC 2009


2009/12/15 Gordon Swobe <gts_2000 at yahoo.com>:
> Re-reading your last paragraph, Stathis, it seems you want to know what I think about replacing neurons in the visual cortex with artificial neurons that do *not* have the essential ingredient for consciousness. I would not dare speculate on that question, because I have no idea if conscious vision requires that essential ingredient in those neurons, much less what that essential ingredient might be.
>
> I agree with your general supposition, however, that we're missing some important ingredient to explain consciousness. We cannot explain it by pointing only to the means by which neurons relate to other neurons, i.e., by Chalmers's functionalist theory, at least not at this time in history.
>
> Functionalism seems a very reasonable religion, and a reason for hope, but I don't see it as any more than that.

It is generally accepted that visual perception occurs in the visual
cortex; without it, some reflexes remain, such as the pupillary
response to light, but you don't experience seeing anything. In any
case, the thought experiment could be done with any part of the brain.
Advanced nanoprocessor-controlled devices, which behave just like
neurons but, being machines, lack the special ingredient for
consciousness that neurons have, are installed in place of part of
your brain; the visual cortex is a good choice for illustration purposes.
You are then asked if you notice anything different. What will you
say? Before answering, consider carefully the implications of the fact
that the essential feature of the artificial neurons is that they
behave just like biological neurons in their interactions with their
neighbours.
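
To make that last point concrete, here is a minimal sketch (in Python, with
hypothetical class and function names chosen purely for illustration) of what
"behaving just like biological neurons in their interactions with their
neighbours" amounts to: the neighbouring neurons only ever see the pattern of
firings, so two implementations with the same input-output behaviour are
indistinguishable from their point of view, whatever substrate they run on.

# Illustrative sketch only; names are hypothetical, not a real model of cortex.

class BiologicalNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, signal):
        """Accumulate input; fire (return True) when threshold is reached."""
        self.potential += signal
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False


class ArtificialNeuron:
    """Different substrate, same interface and same input-output mapping."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, signal):
        self.potential += signal
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False


def drive(neuron, inputs):
    """What the neighbouring neurons 'see': only the sequence of firings."""
    return [neuron.receive(s) for s in inputs]


inputs = [0.3, 0.4, 0.5, 0.2, 0.9]
assert drive(BiologicalNeuron(), inputs) == drive(ArtificialNeuron(), inputs)
print("Neighbours receive identical outputs from either implementation.")

Since the rest of the brain interacts with the replaced region only through
such signals, it has no way to register that anything has changed.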


-- 
Stathis Papaioannou


