[ExI] LLMs cannot be conscious
Jason Resch
jasonresch at gmail.com
Sat Mar 18 12:22:53 UTC 2023
On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> I think those who think LLM AIs like ChatGPT are becoming conscious or
> sentient like humans fail to understand a very important point: these
> software applications only predict language.
>
There is a great deal packed into "predicting language". If I ask an LLM to
explain something to a 3rd grader, it models the comprehension capacity and
vocabulary of a typical third grade student. It has a model of their mind.
Likewise, if I ask it to write something that Shakespeare or Bill Burr might
have produced, it can do so, and so it has an understanding of the writing
styles of these individuals. If I ask it to complete the sentence "a carbon
nucleus may be produced in the collision of three ...", it correctly
completes the sentence, demonstrating an understanding of nuclear physics. If
you provided it a sequence of moves in a tic-tac-toe game and asked it for a
winning move, it could do so,
showing that the LLM understands and models the game of tic-tac-toe. A
sufficiently trained LLM might even learn the different styles of chess play:
if you asked it to give a move in the style of Garry Kasparov, then at some
level the model understands not only the game of chess but the nuances of
different players' styles of play. If you asked it
what major cities are closest to Boston, it could provide them, showing an
understanding of geography and the globe.
All this is to say, there's a lot of necessary understanding
(of physics, people, the external world, and other systems) packed into the
capacity to "only predict language."
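To make the mechanics concrete, here is a rough sketch of what "predicting
language" amounts to, using the small open GPT-2 model via the Hugging Face
transformers library (my own illustrative choice of model and prompt;
ChatGPT-class models work the same way, only at vastly larger scale):

# A minimal sketch of next-token prediction, assuming the transformers and
# torch packages are installed. GPT-2 is used only because it is small and
# freely downloadable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A carbon nucleus may be produced in the collision of three"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire output is a probability distribution over the next
# token; everything it "knows" must be expressed through that distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>15}  {prob.item():.3f}")

Whether a model this small actually ranks "alpha" at the top is an empirical
question; the point is that any knowledge of physics, geography, or chess
styles can only show up in its behavior by being folded into that
distribution over the next token.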
> They are very good at predicting which word should come next in a sentence
> or question, but they have no idea what the words mean.
>
I can ask it to define any word, list synonyms for that word, list its
translations in various languages, and describe the characteristics of the
item the word refers to if it is a noun. It can also solve analogies at a
very high level. It can summarize a text by picking out its most meaningful
points. Is there more to having an idea of what words mean than this?
Can you articulate what an LLM would have to say to show it has a true
understanding of meaning, which it presently cannot say?
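The dispute is at least partly empirical, and anyone can run this kind of
probe themselves. Here is a rough sketch using a small open
instruction-tuned model (google/flan-t5-small, my own arbitrary stand-in,
far weaker than ChatGPT); the open question is what further test, beyond
answers like these, would count as evidence of understanding:

# A sketch of the behavioral probes described above, assuming the
# transformers package is installed.
from transformers import pipeline

ask = pipeline("text2text-generation", model="google/flan-t5-small")

probes = [
    "Define the word 'moon'.",
    "List three synonyms for 'happy'.",
    "Translate 'moon' into French.",
    "Complete the analogy: hand is to glove as foot is to ...",
]

for prompt in probes:
    answer = ask(prompt, max_new_tokens=40)[0]["generated_text"]
    print(f"{prompt}\n  -> {answer}\n")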
> They do not and cannot understand what the words refer to. In linguistic
> terms, they lack referents.
>
Would you say Helen Keller lacked referents? Could she not comprehend, at
least intellectually, what the moon and stars were, despite not having any
way to sense them?
Consider also: our brains never make any direct contact with the outside
world. All our brains have to work with are "dots and dashes" of neuronal
firings. These are essentially just 1s and 0s, signals without referents.
Yet, somehow, seemingly magically, our brains are able to piece together an
understanding of the outside world from the mere patterns present in these
neural firings.
These LLMs are in a similar position. They receive only the patterns of
signals as they exist in a corpus of text, and that text is itself the output
of minds which are similarly trapped in their skulls. Now, can an LLM learn
some things about the minds that produced this text, just as our minds learn
some things about the external world which produces the pattern of neural
firings our brains receive?
I see no reason why LLMs could not, when we clearly can and do.
Jason