[ExI] The symbol grounding problem in strong AI

Aware aware at awareresearch.com
Sun Dec 20 19:55:09 UTC 2009


On Sun, Dec 20, 2009 at 11:29 AM, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> I see people here like Eugen who scoff but who offer no evidence that
> Searle's logic fails. Is it just an article of religious faith on ExI that
> programs have minds? And if it is, and if we cannot explain how it
> happens, then should we adopt the mystical philosophy that everything
> has mind merely to protect the notion that programs do or will?

It's not a problem with Searle's logic but with his premises, and with
those of most who argue against him in defense of a functionalist
account of consciousness, an account which needs no defending.  These perennial
problems of qualia, consciousness and personal identity all revolve
around an assumption of an *essential* self-awareness that, however
seductive and deeply reinforced by personal observation, language and
culture, is entirely lacking in empirical support.  There is no
essential consciousness to be explained, but there is the very real
phenomenon of self-awareness, rife with gaps, distortions, delays and
confabulation, displayed by many adapted organisms and conferring obvious
evolutionary advantages as the agent models its /self/ within its
environment of interaction.  Such silly philosophical
questions are /unasked/ when one realizes that the system need not
have an essential experiencer to report experiences.

Okay, carry on...

- Jef
