[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Mon Jan 4 02:01:36 UTC 2010


--- On Sun, 1/3/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

Revisiting this question:

> Firstly, I understand that you have no philosophical
> objection to the idea that the clockwork neurons *could* have 
> consciousness, but you don't think that they *must* have consciousness, 
> since you don't (to this point) believe as I do that behaving like normal
> neurons is sufficient for this conclusion. Is that right?

In my last message to you I referred to the m-neurons actually used in the experiment. Either they work, in which case the patient passes the TT, reports normal intentionality, and is released from the hospital, or they don't.

But in re-reading your words I understand that you really want to know whether I agree that they needn't work or fail solely by virtue of their inputs and outputs. Yes, I agree with that, as you already know. We simply do not know what neurons must contain to allow a brain to become conscious, but I'd bet that artificial neurons stuffed only with mashed potatoes and gravy won't do the trick, even if we somehow engineer them at the edges to output the correct neurotransmitters into the synapses.

> I actually believe that semantics can *only* come from
> syntax, but if it can't, your fallback is that semantics
> comes from the physical activity inside brains. 

Something along those lines, yes. But we can't paste form onto substance and expect intrinsic intentionality, and pasting form onto hardware substance is all formal programs do. We might just as well write a letter and then expect the letter to understand the words.

-gts