[ExI] The symbol grounding problem in strong AI

John Clark jonkc at bellsouth.net
Tue Dec 15 05:48:07 UTC 2009


On Dec 14, 2009, at 9:10 PM, Gordon Swobe wrote:

>> Consciousness is easy to explain and that's the problem
> 
> Easy to explain? 

Yep, very easy to explain. Only one thing can produce consciousness: a size 12 foot. By the way, I happen to wear size 12 shoes.

That theory is just as good as any other theory of consciousness.

> Muhammad Ali knocked George Foreman out in the 8th round. If consciousness is easy to explain then perhaps you will kindly explain exactly what happened between Foreman's ears that made him lose consciousness, and exactly what happened a few moments later that enabled him to regain it. 

I don't have one scrap of information showing that Mr. Foreman was conscious either before or after that blow; all I know is that his behavior became much less interesting after Mr. Ali gave him that rather vigorous tap on the head. In that instant Mr. Foreman became much less intelligent, and I make no claim of having a theory of intelligence, because unlike theories of consciousness, theories of intelligence are damn hard to come by.

> A Nobel prize awaits.

I've already got my airline tickets to Stockholm.
 
 John K Clark 