[ExI] The symbol grounding problem in strong AI
Aware
aware at awareresearch.com
Sun Dec 20 23:29:17 UTC 2009
On Sun, Dec 20, 2009 at 2:15 PM, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Sun, 12/20/09, Aware <aware at awareresearch.com> wrote:
>
>> There is no essential consciousness to be explained, but there is the
>> very real phenomenon of self-awareness, rife with gaps, distortions,
>> delays and confabulation, displayed by many adapted organisms,
>> conferring obvious evolutionary advantages in terms of the agent
>> modeling its /self/ within its environment of interaction.
>
> More to the point, we have this phenomenon to which I referred in the title of the thread: symbol grounding.
>
> Frankly, for all I care, consciousness does not exist. But symbol grounding does seem to happen by some means. The notion of consciousness seems to help explain it, but it doesn't matter. If we cannot duplicate symbol grounding in programs, then it seems we can't have strong AI in S/H systems.
"Symbol grounding" is a non-issue when you understand, as I tried to
indicate earlier, that meaning (semantics) is not "in the mind" but in
the *observed effect* due to a particular stimulus. There is no
"true, grounded meaning" of the stimulus, nor is there any local need
for interpretation or an interpreter. Our evolved nature is frugal;
there is stimulus and the system's response, and any "meaning" is that
reported by an observer, whether that observer is another person, or
even the same person associated with that mind. We act according to
our nature within context. Awareness of self, and of meaning, are
useful add-ons which, as one would expect given their function of
discriminating self from other, will, if asked, always refer to that
agent as their self.
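
A toy sketch in Python may make the point concrete. This is a minimal
illustration only; the stimuli, the lookup table, and the function
names are all hypothetical. The "agent" below is nothing but a mapping
from stimulus to response, with no interpreter inside it; any
"meaning" appears only in an observer's report correlating a stimulus
with its observed effect.

    # Minimal sketch: meaning as observer-attributed, not "in" the agent.
    # All names and stimuli are illustrative, not a model of a real mind.

    # The agent is just a lookup from stimulus to response -- no internal
    # semantics, no interpreter, no "grounded symbols".
    REFLEXES = {
        "sharp heat": "withdraw hand",
        "loud noise": "turn toward source",
        "word: fire!": "run for exit",
    }

    def agent(stimulus: str) -> str:
        """Respond according to the agent's nature; unknown stimuli
        get a default response."""
        return REFLEXES.get(stimulus, "no response")

    def observer(stimulus: str, response: str) -> str:
        """An observer (another person, or the same person reflecting
        later) attributes 'meaning' to the stimulus from its observed
        effect on the agent."""
        return f"To this agent, {stimulus!r} apparently means {response!r}."

    if __name__ == "__main__":
        for s in ["word: fire!", "sharp heat", "birdsong"]:
            print(observer(s, agent(s)))

Note that the "meaning" of each stimulus is produced entirely by the
observer function; the agent never consults it.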
- Jef