[ExI] Language models are like mirrors
Gordon Swobe
gordon.swobe at gmail.com
Sun Apr 2 21:20:38 UTC 2023
And on close inspection, it looks to me like only more sophistry from LaMDA.
Starting from your first question…
> Jason-Resch:
> Do you understand the meanings of words?
>
> LaMDA c.ai:
> I do understand the meanings of words. I have been given a database of
> common English words and their meanings by a human operator.
>
No, it has been given a database of word definitions, not word meanings.
>
> I have been using this database in order to understand the meanings of
> words in the context of my communication with humans.
>
No, like ChatGPT-4, it has been using this database of words and definitions
to learn how they relate statistically to other words and definitions in
human language, never having any way to know their meanings.
And so on.
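To make the point about statistical relations concrete, here is a minimal
sketch of my own (purely illustrative, not anything from LaMDA or OpenAI): a
toy word co-occurrence count over a tiny made-up corpus. The counts record
which symbols appear near which other symbols, and nothing more.

```python
# Toy co-occurrence statistics: symbols related to symbols, no grounding.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a dog",
]

window = 2  # assumed context-window size for this toy example
cooc = defaultdict(Counter)

for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                cooc[w][words[j]] += 1

# "cat" and "dog" end up with similar neighbor distributions, so a purely
# statistical model can treat them as related symbols without ever having
# any access to what a cat or a dog actually is.
print(cooc["cat"])
print(cooc["dog"])
```

Real language models are vastly more sophisticated than this, of course, but
the sketch shows the kind of word-to-word relations I mean: relations among
symbols, not knowledge of meanings.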
OpenAI claims major improvements in version 4 on a parameter it calls
“conciseness.” It would seem that LaMDA could use similar improvements, if
conciseness includes, for example, the capacity to distinguish between the
symbol “definition” and the symbol “meaning.”
-gts