[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sun Dec 27 19:32:19 UTC 2009


--- On Sun, 12/27/09, BillK <pharos at gmail.com> wrote:

> And that's the important point for the future of humanity.
> We don't care whether the AGI is 'really' intelligent or just
> 'simulating' intelligence. It is the practical results that matter.

As I wrote near the outset of this discussion (to John Clark, as I recall), some people care about the difference between strong and weak AI, and some people don't. To those of us who care, Searle has something interesting to say.

-gts




