[ExI] Bender's Octopus (re: LLMs like ChatGPT)
Gordon Swobe
gordon.swobe at gmail.com
Fri Mar 24 03:15:47 UTC 2023
On Thu, Mar 23, 2023 at 8:39 PM Will Steinberg via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> I don't have a lot of faith in a person who has a hypothesis and designs a
> thought experiment that is essentially completely irrelevant to the
> hypothesis.
>
As I wrote, I agree the thought experiment does not illustrate her point
clearly, at least outside the context of her academic paper. As I've
mentioned, the octopus is meant to represent the situation an LLM is in:
completely disconnected from the meanings of words (their referents), which
exist only outside of language, in the real world represented by the
islands. But it is a sloppy thought experiment if you don't already know
what she is trying to say.
It is about form vs. meaning. LLMs are trained only on, and only "know" (so
to speak), the forms and patterns of language. They are like very talented
parrots, rambling on in seemingly intelligent ways and mimicking human
speech, but never having any idea what they are talking about.
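
To make the "trained only on form" point concrete, here is a toy sketch of my
own (not from Bender's paper, and vastly simpler than a real LLM): a
character-level bigram model. All it ever sees is which symbols follow which
other symbols; nothing in its training links any symbol to a referent in the
world, yet it can still emit statistically plausible-looking text.

    import random
    from collections import defaultdict

    corpus = "the octopus answered the islanders in fluent english "

    # "Training": count which character follows which. Pure form,
    # no referents, no world.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    # "Generation": sample continuations from those counts alone.
    random.seed(0)
    ch = "t"
    out = [ch]
    for _ in range(60):
        ch = random.choice(follows[ch])
        out.append(ch)

    print("".join(out))  # plausible-looking form, no grasp of meaning

A real LLM replaces the bigram counts with a transformer over billions of
tokens, but the training signal is the same in kind: predict the next token
from the preceding tokens, nothing more.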
-gts