[ExI] LLMs cannot be conscious

Gordon Swobe gordon.swobe at gmail.com
Sun Mar 19 06:03:29 UTC 2023


Consider that LLMs are like dictionaries. A complete dictionary can give
you the definition of any word, but that definition is in terms of other
words in the same dictionary. If you want to understand the *meaning* of
any word's definition, you must look up the definitions of each word in
the definition, and then look up each of the words in those definitions,
which leads to an infinite regress.
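
To make the regress concrete, here is a toy sketch in Python. The
mini-dictionary and the trace function are invented purely for
illustration, not taken from any actual system; the point is only that
every lookup lands on more words, never on anything outside the symbol
system.

# Toy illustration of the dictionary regress. The mini-dictionary is
# invented for the example; a real dictionary is just a much larger
# version of the same structure: words defined in terms of words.
toy_dictionary = {
    "big": "of great size",
    "great": "very big",
    "size": "how big something is",
    "very": "to a great degree",
}

def trace(word, depth=0, seen=None):
    # Follow definitions word by word, printing each lookup.
    seen = set() if seen is None else seen
    definition = toy_dictionary.get(word)
    if definition is None:
        return  # undefined here; a complete dictionary defines this too
    print("  " * depth + word + " -> " + definition)
    if word in seen:
        return  # the chain has circled back on itself
    seen.add(word)
    for w in definition.split():
        trace(w, depth + 1, seen)

trace("big")
# Every branch either circles back to a word already visited or runs
# into another definition made of words; no lookup ever bottoms out
# in a referent outside the dictionary.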

Dictionaries do not actually contain or know the meanings of words, and I
see no reason to think LLMs are any different.

-gts




On Sat, Mar 18, 2023, 3:39 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> I think those who think LLM AIs like ChatGPT are becoming conscious or
> sentient like humans fail to understand a very important point: these
> software applications only predict language. They are very good at
> predicting which word should come next in a sentence or question, but they
> have no idea what the words mean. They do not and cannot understand what
> the words refer to. In linguistic terms, they lack referents.
>
> Maybe you all already understand this, or maybe you have some reasons why
> I am wrong.
>
> -gts
>
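
To put the "predicting which word should come next" claim in concrete
terms, here is a minimal sketch. It uses simple bigram counts over a
made-up corpus rather than a neural network, so it illustrates only the
statistical idea, not how GPT-style models are actually implemented.

# Minimal sketch of next-word prediction from co-occurrence statistics.
# The corpus is invented; real LLMs train neural networks on vastly more
# text, but the training signal is of the same kind: which token tends
# to follow which context.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word, if any.
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("cat"))  # 'sat', picked purely from frequencies
print(predict_next("sat"))  # 'on'

At no point does the program consult what a cat or a mat is; the word
statistics alone produce plausible-looking continuations.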