[ExI] LLMs cannot be conscious

Jason Resch jasonresch at gmail.com
Sun Mar 19 12:13:35 UTC 2023


On Sun, Mar 19, 2023, 2:04 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Consider that LLMs are like dictionaries. A complete dictionary can give
> you the definition of any word, but that definition is in terms of other
> words in the same dictionary. If you want to understand the *meaning* of
> any word definition, you must look up the definitions of each word in the
> definition, and then look up each of the words in those definitions, which
> leads to an infinite regress.
>

Could a superb alien intelligence, if it happened upon a dictionary from
our civilization, one containing only words and no pictures, figure out
what all the words meant?

I tend to think they could. There is information in the structure of the
words and their interrelationships. Taken together, the definitions of
every word would form a vast connected graph. Patterns would emerge in
this graph, and in some places the aliens would see an analogy between
parts of the graph and elements of the real world.

Perhaps it would be mathematical objects like circles, spheres, and cubes,
the five Platonic solids, the examples of prime numbers, or the definition
of pi. Perhaps it would be physical objects like atoms, protons, electrons,
and neutrons, or the 92 naturally occurring elements. Perhaps it would be
in biological definitions of nucleic and amino acids. But somewhere in this
graph they would find an analogy with the real world. It would start off
slowly at first, but like solving a puzzle, each new word deciphered is a
clue that makes solving the rest that much easier.
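
To make the graph idea concrete, here is a minimal sketch in Python (the
toy dictionary entries and the naive tokenization are invented purely for
illustration) of how a set of definitions forms a directed graph whose
structure exists independently of anyone already knowing what the words
mean:

from collections import defaultdict

# Toy dictionary, invented for this example only.
definitions = {
    "circle": "a figure whose boundary is equidistant from a point",
    "point": "an exact location on a plane or a line",
    "boundary": "a line that marks the limit of a figure",
    "line": "a long narrow mark joining one point to another",
    "figure": "a shape enclosed by a boundary",
}

def build_graph(defs):
    """Link each headword to the other headwords used in its definition."""
    graph = defaultdict(set)
    for word, text in defs.items():
        for token in text.lower().split():
            if token in defs and token != word:
                graph[word].add(token)
    return graph

for word, neighbours in sorted(build_graph(definitions).items()):
    print(word, "->", sorted(neighbours))

# Every edge points from one headword to another, so meaning never
# "bottoms out" inside the graph itself. But the graph's structure (its
# cycles, hubs, and clusters) is exactly the kind of pattern an outside
# intelligence could try to match against structures it already knows,
# such as the geometry of points, lines, and circles.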


> Dictionaries do not actually contain or know the meanings of words, and I
> see no reason to think LLMs are any different.
>

Is an LLM more like a dictionary, or more like the language center of the
human brain?

If the latter, then I could pose the following counterargument: "The human
brain's language center does know the meanings of words, and I see no
reason to think LLMs are any different."

To move forward, we need to answer:

1. What is meaning?
2. Do human brains contain meaning?
3. How is meaning present or inherent in the organization of neurons in the
human brain?
4. Can similar organizations that create meaning in the human brain be
found within LLMs?

Answering these questions is necessary to move forward; otherwise, we will
only go back and forth, with some saying that LLMs are more like
dictionaries and others saying that LLMs are more like the
language-processing centers of human brains.

Jason


> On Sat, Mar 18, 2023, 3:39 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:
>
>> I think those who think LLM AIs like ChatGPT are becoming conscious or
>> sentient like humans fail to understand a very important point: these
>> software applications only predict language. They are very good at
>> predicting which word should come next in a sentence or question, but they
>> have no idea what the words mean. They do not and cannot understand what
>> the words refer to. In linguistic terms, they lack referents.
>>
>> Maybe you all already understand this, or maybe you have some reasons why
>> I am wrong.
>>
>> -gts
>>