[ExI] LLMs cannot be conscious

Gordon Swobe gordon.swobe at gmail.com
Sat Mar 18 09:39:19 UTC 2023


I think those who believe LLM AIs like ChatGPT are becoming conscious or
sentient like humans fail to understand a very important point: these
software applications only predict language. They are very good at
predicting which word should come next in a sentence or question, but they
have no idea what the words mean. They do not and cannot understand what
the words refer to. In linguistic terms, they lack referents.
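The "only predict language" point can be illustrated with a deliberately tiny sketch: a bigram model that picks the next word purely from co-occurrence counts in its training text. This is a hypothetical toy, vastly simpler than a transformer LLM, but it makes the structural claim concrete: the model sees only word sequences, never the things the words refer to.

```python
from collections import Counter, defaultdict

# Toy corpus: the model is exposed only to word sequences,
# never to cats, mats, or fish themselves.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "cat", chosen purely from counts
```

The prediction is driven entirely by distributional statistics; nothing in the program links "cat" to any actual cat. Whether scaling this idea up to billions of parameters changes that picture is, of course, exactly the question under debate.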

Maybe you all already understand this, or maybe you have some reasons why I
am wrong.

-gts