[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Mon Dec 21 00:44:00 UTC 2009


--- On Sun, 12/20/09, Aware <aware at awareresearch.com> wrote:

I think I've seen this sort of psychedelic word salad before, but under a less ambiguous moniker. Hello again jef. Long time. 

> "Symbol grounding" is a non-issue when you understand, as I
> tried to indicate earlier, that meaning (semantics) is not "in the
> mind" but in the *observed effect* due to a particular stimulus. 

I won't argue that it does not appear as an observed effect due to a stimulus, but if the word "mind" has any meaning, then when I understand the meaning of anything else, I understand it there in my mind. 

> There is no "true, grounded meaning" of the stimulus

That's fine. Meaning != truth. 

> nor is there any local need for interpretation or an interpreter.  

Somebody sits here in my chair. He wants to interpret the meanings of your funny words. He sits here locally. Really.

> Our evolved nature is frugal; there is stimulus and the system's 
> response, and any "meaning" is that reported by an observer, whether
> that observer is another person, or even the same person associated with 
> that mind.  

Good that you at least allow the existence of minds. That's a start. Now then, when that observer reports the meaning of a word to or in his own mind, whence comes the understanding of the meaning? More importantly, how do we get that in software? How can a program get semantics?
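
To make the question concrete, here is a toy Python sketch (purely my own illustration; the names "definitions" and "expand" are made up) of a program that handles symbols only by mapping them onto other symbols. It can shuffle them around all day, but nothing in it ever attaches a symbol to the world:

# Toy illustration: a "dictionary" program in which every symbol is
# defined only in terms of other symbols. Expanding a definition just
# yields more symbols; the loop never bottoms out in anything non-symbolic.

definitions = {
    "water": ["clear", "liquid"],
    "clear": ["transparent"],
    "transparent": ["clear"],   # circular, as pure symbol systems are
    "liquid": ["flows"],
    "flows": ["liquid"],
}

def expand(symbol, depth=3):
    """Replace a symbol with its 'definition', recursively."""
    if depth == 0 or symbol not in definitions:
        return [symbol]
    result = []
    for part in definitions[symbol]:
        result.extend(expand(part, depth - 1))
    return result

print(expand("water"))   # ['clear', 'liquid'] -- still just symbols

That, as I see it, is the question in miniature: where, in something like that, would the semantics come from?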

> We act according to our nature within context. 

I've seen that "within context" qualifier many times before, also from the same jef I remember. :)

-gts