[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Fri Dec 25 02:15:01 UTC 2009


--- On Thu, 12/24/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> We replace Wernicke's area with an artificial analogue 
> that is as unnatural, robotlike and (it is provisionally assumed)
> mindless as we can possibly make it. 

I have no concerns about how "robotlike" you might make your artificial neurons. I don't assume that natural neurons do not also behave robotically. 

I do, however, assume that natural neurons do not run formal programs like those running now on your computer. (If they do, then I must wonder who wrote them.)

> The only requirement is that it masquerade as normal for the benefit 
> of the neural tissue with which it interfaces.

You have not shown that the effects that concern us here do not emanate in some way from the interior behaviors and structures of neurons. As I recall, the electrical activity of neurons takes place inside them, not outside them, and it seems very possible to me that this internal electrical activity has an extremely important role to play.

> This subject should have no understanding of language... 

I don't jump so easily to conclusions. 

-gts
