[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Wed Dec 23 11:43:38 UTC 2009


--- On Mon, 12/21/09, Christopher Doty <suomichris at gmail.com> wrote:

> 2009/12/21 John Clark <jonkc at bellsouth.net>:
> >> even if computers *did* have consciousness, they still would have no
> >> understanding of the meanings of the symbols contained in their
> >> programs.
> >
> > There may be stupider statements than the one that can be seen above,
> > but I am unable to come up with an example of one, at least right at
> > this instant off the top of my head.
> 
> The *entire* statement is not stupid.  

Thank you, Chris. Some people understand the symbol grounding problem in formal programs; others don't really care to understand it.

> Nonetheless, I'm hard-pressed to see how a computer could come
> to consciousness without having any understanding of any of
> the symbols in its programming...

It would not come to an understanding of the symbols by virtue of its syntactic processing of them; that is, syntax is not enough to give semantics. However, I would not go so far as to say that conscious computers could not find some other way to get semantics, just as humans do. The point is that it would involve something other than running programs.
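To make concrete what "purely syntactic processing" means here, consider a minimal sketch (not from the original post; the tokens and rule table below are made up for illustration). The program relates token shapes to token shapes and never touches anything that could count as their meaning:

# Minimal sketch of purely syntactic symbol manipulation.
# The rule table relates uninterpreted token shapes to other token shapes;
# nothing in the program refers to what (if anything) the tokens mean.

RULES = {
    ("SQUIGGLE", "SQUOGGLE"): "SPLODGE",
    ("SQUOGGLE", "SPLODGE"): "SQUIGGLE",
}

def respond(tokens):
    """Return an output token by matching input token shapes against RULES.

    The same program would run unchanged if the tokens were Chinese
    characters, chess moves, or random strings.
    """
    return RULES.get(tuple(tokens), "UNKNOWN")

if __name__ == "__main__":
    print(respond(["SQUIGGLE", "SQUOGGLE"]))  # prints SPLODGE

Whether any amount of this kind of rule-following could ever add up to understanding is, of course, exactly the point under dispute.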

-gts
