[ExI] The symbol grounding problem in strong AI.

John Clark jonkc at bellsouth.net
Sun Jan 3 18:37:50 UTC 2010


On Jan 3, 2010, at 12:39 PM, Gordon Swobe wrote:

> You misunderstand me if you think I believe consciousness and intelligence exist "separately" in humans or other animals. For most purposes we can consider them near-synonyms, or at least handmaidens.

Well that's a start.

> The distinction does however become important in the context of strong AI research. Symbol grounding requires the sort of subjective first-person perspective that evolved in these machines we call humans, and which probably also evolved in other species. If we can duplicate it in software/hardware systems then they can have strong AI. Not really a complicated idea.

The operative word in the above is "evolved". Why did this mysterious "subjective symbol grounding" (bafflegab translation: consciousness) evolve? Not only can't you explain how this thing is supposed to work, you can't explain how it came to be. Darwin would certainly be no help, because on your view it has absolutely no effect on behavior; in fact, that is precisely why you think the Turing Test doesn't work. And even if it came about by pure chance it wouldn't last; it would actually be detrimental, because the resources used to generate consciousness could be better spent on things that do something useful, like helping get genes into the next generation. And yet consciousness exists. Why?

We don't know a lot about consciousness, but one of the few things we do know is that Darwin is screaming that intelligence and consciousness are two sides of the same coin.

 John K Clark

