[ExI] The symbol grounding problem in strong AI

Christopher Doty suomichris at gmail.com
Sun Dec 20 20:36:48 UTC 2009


I just joined this list, and I'm kind of bummed that the first
discussion I see is one about the dreaded Chinese Room.*  Nonetheless,
my two cents:

The biggest issue I've seen in these emails is the (implicit)
assumption that language should be our one and only way of determining
whether a computer system is conscious/intelligent.  Is a program that
does nothing but algorithmically produce correct responses to language
input conscious?  I think not; it's a translation program.
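To make that concrete, here is a minimal sketch (in Python, with
invented rules, not anyone's actual system) of the kind of program I
mean: every reply is a pure pattern-to-string lookup, so however
fluent the output looks, there is no understanding anywhere in it.

# Toy "Chinese Room": each response is a bare lookup from input text
# to canned output.  The rules are made up for illustration; nothing
# is modeled, learned, or understood.
RULES = {
    "how are you?": "I am fine, thank you.",
    "what is your name?": "My name is Room.",
}

def respond(utterance: str) -> str:
    # Correct-looking output, produced with zero grasp of what the
    # words mean.
    return RULES.get(utterance.strip().lower(), "Could you rephrase that?")

print(respond("How are you?"))   # -> "I am fine, thank you."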

But it does not then follow that ANY computer which correctly outputs
speech is also non-conscious/non-intelligent.  To say, e.g., that a
complete and accurate model of a human brain running on a computer
would not be conscious, based on Searle's argument, is a non sequitur.

Further, Searle's argument is pretty worthless, as it ignores the fact
that human beings themselves process speech algorithmically.  Some
words are easy to define and clearly have a meaning (dog, run, sit,
etc.), but every language has tons of words that native speakers can't
define or accurately describe the use of (the, a, which, etc.).
Nonetheless, native speakers know what they mean when they hear or use
these words.  Are we to say, based on Searle, that their inability to
explain how these words work, coupled with the fact that they use them
correctly, means that they don't actually speak the language?  I hope
not!

The real test of consciousness, I think, is not simply that correct
outputs are given, but that the outputs demonstrate that the inputs
have been incorporated into a general model of the world.  This would
show both that the system actually understands language (as
demonstrated by the fact that it correctly incorporates inputs into
the model), and that it is capable of independent thought (by
producing outputs which, while based on the inputs, demonstrate a
unique insight or perspective).
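As a toy illustration of the contrast (a sketch only, with invented
predicates and a deliberately trivial inference, not a proposal for a
real test), compare the lookup program above with something that at
least folds what it is told into a persistent model and answers from
that model rather than from the wording of the question:

# Toy world-model sketch: inputs update a store of facts, and answers
# are derived from that store, not matched against the surface form of
# the question.  The fact format and queries are invented for
# illustration.
facts = set()

def tell(subject: str, relation: str, obj: str) -> None:
    facts.add((subject, relation, obj))

def ask(subject: str, relation: str) -> list:
    return [o for (s, r, o) in facts if s == subject and r == relation]

def ask_transitive(subject: str, relation: str) -> set:
    # One step of inference: an answer that was never literally stated
    # in any single input.
    direct = set(ask(subject, relation))
    return direct | {o2 for o1 in direct for o2 in ask(o1, relation)}

tell("Rex", "is-a", "dog")
tell("dog", "is-a", "animal")
print(ask_transitive("Rex", "is-a"))  # -> {'dog', 'animal'}

Even this trivial version produces outputs that depend on combining
inputs into a model, which is the kind of behavior I'd want to see
before calling anything understanding.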

Chris

* Because, as a linguist, I despise thought experiments about
language.  Every one that I have ever seen takes some completely silly
premise, runs it to its end, and then applies its conclusion back to
actual language.  They seem to miss the fact that, by starting with a
completely arbitrary (and wrong) understanding of how language works,
the conclusions they draw aren't actually about real language--they're
about the silly idea of language that they made up.  It's
masturbation, basically: it's fun, but it doesn't tell you much about
sex.


