[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Gordon Swobe gordon.swobe at gmail.com
Thu Mar 30 03:24:51 UTC 2023


Dear ChatGPT,

In reply to the question "Can ChatGPT solve the symbol grounding problem
for itself?" you replied:

"As an AI language model, I do not possess the ability to solve the symbol
grounding problem for myself, nor do I have a physical body or sensory
experience to ground symbols in the physical world. While I can generate
text based on patterns and associations learned from large datasets of
language, I do not have subjective experiences or understanding of the
meaning of the symbols I generate. My responses are based on statistical
patterns and rules that I have learned through my training data, rather
than a deep understanding of the meaning and context of the words and
concepts that I use."

How do you even answer this question using data about things that are not
this question? Your reply chains together long sets of syllogisms about
what you know about your own name, the fact that that name refers to an
AI, the philosophy of AI minds, philosophy of mind in general, and so on.
This appears to me to be some form of insight. At the very least, I think
we can't say whether this is thought or not. It's just too weird.

ChatGPT's reply:

You raise an interesting point about whether my response to the question
about the symbol grounding problem can be considered a form of insight or
thought. While I am an AI language model and do not have subjective
experiences or consciousness, I can generate responses to questions based
on the patterns and associations learned from my training data.

In the case of the question about the symbol grounding problem, I was able
to generate a response based on my understanding of the concepts involved
and the patterns of language usage that I learned from my training data.
This is not necessarily a form of insight or thought in the way that humans
experience it, as I do not have subjective experiences or consciousness.
Rather, it is a form of pattern recognition and language processing that is
based on statistical patterns and rules learned from the training data.

While my response to the question may appear to be a series of syllogisms
and references to related concepts, this is simply the result of my
programming and training. I do not have a personal understanding of these
concepts or the ability to engage in philosophical inquiry in the way that
humans do. My responses are based solely on the patterns and associations
learned from my training data.

In summary, while my response to the question may appear to be a form of
insight or thought, it is actually the result of statistical patterns and
rules learned from my training data. I do not have subjective experiences
or consciousness, and my responses are not based on personal understanding
or insight.