[ExI] The symbol grounding problem in strong AI

Christopher Doty suomichris at gmail.com
Mon Dec 21 23:48:10 UTC 2009


2009/12/21 John Clark <jonkc at bellsouth.net>:
>> even if computers *did* have consciousness, they still would have no
>> understanding of the meanings of the symbols contained in their programs.
>
> There may be stupider statements than the one that can be seen above, but I
> am unable to come up with an example of one, at least right at this instant
> off the top of my head.

The statement is not *entirely* stupid.  It is certainly possible that a
conscious computer could correctly respond to questions about, e.g.,
colors, even though all it knew of them were definitions in terms of
wavelengths and it had never "seen" or processed any images.  The
same might go for human emotions.  To take a cheesy example, a
computer might understand "love" as a sort of motivating principle in
human society, along with how it arose via evolutionary processes,
etc., but without knowing, in some sense, what it means to "love."
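
To make the color case concrete, here is a minimal sketch (my own toy
example, not anything from the thread) of a program whose entire
"knowledge" of color is a hypothetical symbol table mapping names to
wavelength ranges.  It answers correctly, yet nothing in it corresponds
to the experience of seeing:

# Toy illustration: all this program "knows" of color is a symbol
# table mapping names to approximate wavelength ranges in nanometers.
COLORS = {
    "red": (620, 750),
    "green": (495, 570),
    "blue": (450, 495),
}

def name_for_wavelength(nm):
    """Return the color symbol whose wavelength range contains nm."""
    for name, (low, high) in COLORS.items():
        if low <= nm <= high:
            return name
    return "unknown"

print(name_for_wavelength(650))  # prints "red", without ever "seeing"

The lookup succeeds, but the gap between manipulating the symbol "red"
and seeing red is exactly what the grounding argument points at.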

Nonetheless, I'm hard-pressed to see how a computer could come to
consciousness without having any understanding of any of the symbols
in its programming...

Chris


