[ExI] Language models are like mirrors

Gordon Swobe gordon.swobe at gmail.com
Tue Apr 4 02:22:41 UTC 2023


On Mon, Apr 3, 2023 at 4:09 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I don't know if someone else has already noted this (I'm still catching up
> on the recent flood of posts), but don't you consider it ironic that you
> are using the system's own apparent understanding of itself to show that it
> doesn't understand things?
>

Yes, I've noticed this, and I've mentioned that I find it not only ironic
but hilarious that these models are themselves explaining their limitations
in the same way I did on this list some 15 years ago, when such things as
ChatGPT were only hypothetical.

Philosophers often use brackets or scare quotes as shorthand to mark
different senses of a word. When I agree that ChatGPT "understands" that it
does not actually understand word meanings, this is only shorthand for
saying that the software identifies statistical relationships and patterns
among English word-symbols, which allow it to compose sentences, paragraphs,
entire stories, and many other kinds of documents that are meaningful to us
but not to it. As ChatGPT-4 "agrees," it functions as a highly sophisticated
autocomplete feature, not unlike the one found in any word processing
software, only far more powerful because it has been trained on a massive
amount of written material.
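To make the autocomplete analogy concrete, here is a minimal sketch in
Python of the general idea: a toy model that merely counts which word tends
to follow which in its training text and extends a prompt from those counts.
This is my own illustration of statistical next-word prediction, not
anything drawn from ChatGPT's actual implementation, which uses a vastly
larger neural network rather than simple word counts; the tiny corpus and
the names "autocomplete" and "successors" are made up for the example.

    from collections import Counter, defaultdict

    # Toy training text; a real model is trained on a massive amount of written material.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count which word follows which: successors[w] is a Counter of words seen after w.
    successors = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        successors[current_word][next_word] += 1

    def autocomplete(prompt, length=5):
        """Extend the prompt word by word, always choosing the most frequent successor."""
        words = prompt.split()
        for _ in range(length):
            candidates = successors.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(autocomplete("the cat"))  # continues the prompt using only observed word statistics

The point of the sketch is only that the program manipulates word-symbols
according to their observed statistics; at no step does it need to know what
"cat" or "mat" means.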

-gts