[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Will Steinberg steinberg.will at gmail.com
Thu Mar 30 03:12:49 UTC 2023


On Mon, Mar 27, 2023, 10:48 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> "Can ChatGPT solve the symbol grounding problem for itself?"
>
> "As an AI language model, I do not possess the ability to solve the symbol
> grounding problem for myself, nor do I have a physical body or sensory
> experience to ground symbols in the physical world. While I can generate
> text based on patterns and associations learned from large datasets of
> language, I do not have subjective experiences or understanding of the
> meaning of the symbols I generate. My responses are based on statistical
> patterns and rules that I have learned through my training data, rather
> than a deep understanding of the meaning and context of the words and
> concepts that I use."
>
> -gts
>
>
How does it even answer this question using data about things that are not
this question?  It strings together long chains of syllogisms from what it
knows about its own name, that this name refers to an AI, the philosophy of
AI minds, the philosophy of mind in general, etc.  That looks to me like
some form of insight.  At the very least, I don't think we can say whether
this is thought or not.  It's just too weird.
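
For concreteness, here is a minimal sketch of what "generating text from
statistical patterns" can mean: a toy bigram model in Python that predicts
each next word from counts observed in its training text.  This is nothing
like ChatGPT's actual transformer architecture, and the corpus below is
made up, but the underlying principle (sample the next token from learned
statistics) is the same.

import random
from collections import defaultdict

# Toy training text; a real model is trained on billions of tokens.
corpus = "the cat sat on the mat the cat ran".split()

# Count which words follow which: the model's "statistical patterns".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:  # no observed continuation; stop
            break
        # Sampling from the list weights words by observed frequency.
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"

The point of the toy is only that fluent-looking continuations can fall
out of frequency statistics alone; whether that scales up to anything we
should call insight is exactly the question at issue.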
