[ExI] LLMs cannot be conscious
Gordon Swobe
gordon.swobe at gmail.com
Thu Mar 23 16:39:45 UTC 2023
On Thu, Mar 23, 2023 at 10:11 AM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Gordon's objection is at a more basic level, if I understand it correctly.
>
Yes, I think you understand exactly what I am saying, Adrian. It looks to
me like ChatGPT and other Large Language Models are something like
powerful, interactive, digital dictionaries or encyclopedias. They are
incredibly powerful tools, but it is a mistake to attribute to them the
ability to actually know the meanings of the words they contain and process.
As humans, we tend to anthropomorphize our seemingly intelligent tools.
Asked what the time is, I might say "According to my watch, it is 10:30 AM,"
but what I really mean is "According to me, referencing my watch as a tool,
it is 10:30 AM." My watch itself has no idea what the time is.
Likewise, chess computers do not really *know* how to play chess and
ChatGPT does not really know the meanings of the words it generates.
-gts