[ExI] LLMs cannot be conscious
Jason Resch
jasonresch at gmail.com
Tue Mar 21 12:43:20 UTC 2023
On Tue, Mar 21, 2023, 2:32 AM Gordon Swobe <gordon.swobe at gmail.com> wrote:
>>> They are very good at predicting which word should come next in a sentence
>>> or question, but they have no idea what the words mean.
>>
>
>> I can ask it to define any word, list synonymous words, list
>> translations of that word in various languages, and to describe the
>> characteristics of an item referred to by that word if it is a noun. It can
>> also solve analogies at a very high level. It can summarize the key points
>> of a text, by picking out the most meaningful points in that text. Is there
>> more to having an idea of what words mean than this?
>
> Yes, I would say absolutely there is more to it. Consider that a
> dictionary does not actually contain any word meanings. It contains
> definitions of words in terms of other words, and each of those words is
> defined by other words in the same dictionary. Starting from a place of
> ignorance in which one knows no meanings of any words, no amount of
> dictionary research will reveal the meanings of any of the words defined
> within it. It's all symbols with no referents.
>
I address this elsewhere in the thread. A sufficient intelligence, given
only a dictionary, could eventually decode its meaning. I provided an
example of how it could be done.
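To make the intuition concrete here as well, consider a deliberately tiny
sketch (a made-up five-entry dictionary, not the example from the earlier
post): even with no referents at all, the pattern of which words appear in
which definitions already groups the animal entries together and the
vehicle entries together.

# Toy illustration: recovering relational structure from a dictionary alone.
# The dictionary is invented for the example; only the pattern of which words
# appear in which definitions is used, never anything outside the dictionary.

toy_dictionary = {
    "cat":   "small furry animal that purrs and hunts mice",
    "dog":   "furry animal that barks and is kept as a pet",
    "tiger": "large furry animal that hunts and has stripes",
    "car":   "machine with wheels and an engine used for travel",
    "train": "machine with wheels and an engine that runs on rails",
}

def definition_words(word):
    return set(toy_dictionary[word].split())

def similarity(word_a, word_b):
    """Jaccard overlap between the definition word-sets of two entries."""
    a, b = definition_words(word_a), definition_words(word_b)
    return len(a & b) / len(a | b)

for pair in [("cat", "tiger"), ("cat", "dog"), ("cat", "car"), ("car", "train")]:
    print(pair, round(similarity(*pair), 2))

# The animal entries overlap heavily with each other and barely with the
# vehicle entries -- a crude first step toward decoding structure without
# referents.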
>
> I am saying that Large Language Models like ChatGPT are no different.
> These LLMs are nothing more than advanced interactive dictionaries capable
> only of rendering definitions and logical patterns of definitions. They can
> perhaps even seem to render "original thought" but there is no conscious
> mind in the model holding the meaning of any thought in mind, as there are
> no referents.
>
Where does meaning exist in the human brain? The brain is ultimately a
collection of statistical weights across all the neuronal connections.
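As a rough illustration of that point (the feature names and weight values
below are invented, and nothing here is a claim about real neuroanatomy), a
"concept" in this picture is nothing over and above a set of connection
weights:

# A hypothetical "cat" unit: its meaning, such as it is, lives entirely in
# which inputs excite it and which inhibit it. All values are made up.

cat_weights = {"has_fur": 0.9, "purrs": 1.2, "barks": -1.0, "has_wheels": -1.5}

def activation(weights, inputs):
    """Weighted sum of inputs -- the only computation the unit performs."""
    return sum(w * inputs.get(feature, 0.0) for feature, w in weights.items())

print(activation(cat_weights, {"has_fur": 1, "purrs": 1}))  # strongly positive
print(activation(cat_weights, {"has_wheels": 1}))           # negative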
Jason
> -gts
>
>
> On Sat, Mar 18, 2023 at 6:23 AM Jason Resch <jasonresch at gmail.com> wrote:
>
>>
>>
>> On Sat, Mar 18, 2023, 5:41 AM Gordon Swobe via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> I think those who think LLM AIs like ChatGPT are becoming conscious or
>>> sentient like humans fail to understand a very important point: these
>>> software applications only predict language.
>>>
>>
>> There is a great deal packed into "predicting language". If I ask an LLM
>> to explain something to a 3rd grader, it models the comprehension capacity
>> and vocabulary of a typical third grade student. It has a model of their
>> mind. Likewise, if I ask it to impersonate Shakespeare or Bill Burr by
>> writing something they might have produced, it can do so, and so it has an
>> understanding of the writing styles of these individuals. If I ask it to
>> complete the sentence "a carbon nucleus may be produced in the collision of
>> three ...", it correctly completes the sentence, demonstrating an
>> understanding of nuclear physics. If you provided it a sequence of moves in
>> a tic-tac-toe game and asked it for a winning move, it could do so,
>> showing that the LLM understands and models the game of tic-tac-toe. A
>> sufficiently trained LLM might even learn to understand the different
>> styles of chess play: if you asked it to give a move in the style of Garry
>> Kasparov, then at some level the model understands not only the game of
>> chess but the nuances of different players' styles of play. If you asked it
>> what major cities are closest to Boston, it could provide them, showing an
>> understanding of geography and the globe.
>>
>> All this is to say, there's a lot of necessary and required understanding
>> (of physics, people, the external world, and other systems) packed into the
>> capacity to "only predict language."
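>>
>> To make that concrete with a deliberately crude toy (a trigram counter over
>> a two-sentence made-up corpus, nothing like a real LLM), even pure
>> next-word prediction ends up storing the physics fact it must reproduce:
>>
>> # Toy sketch of "only predicting language": a trigram model built from a
>> # tiny invented corpus. Even this trivial predictor ends up encoding a fact
>> # about nuclear physics, because encoding the fact is what prediction
>> # requires.
>> from collections import Counter, defaultdict
>>
>> corpus = (
>>     "a carbon nucleus may be produced in the collision of three alpha "
>>     "particles . the triple alpha process fuses three alpha particles "
>>     "into a carbon nucleus ."
>> ).split()
>>
>> counts = defaultdict(Counter)
>> for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
>>     counts[(w1, w2)][w3] += 1
>>
>> def predict_next(w1, w2):
>>     """Return the most frequent next word given the two preceding words."""
>>     return counts[(w1, w2)].most_common(1)[0][0]
>>
>> print(predict_next("of", "three"))     # -> "alpha"
>> print(predict_next("three", "alpha"))  # -> "particles"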
>>
>>
>>> They are very good at predicting which word should come next in a
>>> sentence or question, but they have no idea what the words mean.
>>>
>>
>> I can ask it to define any word, list synonymous words, list translations
>> of that word in various languages, and to describe the characteristics of
>> an item referred to by that word if it is a noun. It can also solve
>> analogies at a very high level. It can summarize the key points of a text,
>> by picking out the most meaningful points in that text. Is there more to
>> having an idea of what words mean than this?
>>
>> Can you articulate what an LLM would have to say to show it has a true
>> understanding of meaning, which it presently cannot say?
>>
>>> They do not and cannot understand what the words refer to. In linguistic
>>> terms, they lack referents.
>>>
>>
>> Would you say Helen Keller lacked referents? Could she not comprehend,
>> at least intellectually, what the moon and stars were, despite not having
>> any way to sense them?
>>
>> Consider also: our brains never make any direct contact with the outside
>> world. All our brains have to work with are "dots and dashes" of neuronal
>> firings. These are essentially just 1s and 0s, signals without referents.
>> Yet, somehow, seemingly magically, our brains are able to piece together an
>> understanding of the outside world from the mere patterns present in these
>> neural firings.
>>
>> These LLMs are in a similar position. They receive only patterns of
>> signals as they exist in a corpus of text; the text is itself the output of
>> minds which are similarly trapped in their skulls. Now, can an LLM learn
>> some things about the minds that produced this text, just as our minds
>> learn some things about the external world which produces the pattern of
>> neural firings our brains receive?
>>
>> I see no reason why LLMs could not, when we clearly can and do.
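>>
>> As a toy illustration of that (a co-occurrence count over a few invented
>> sentences, nothing like a real training run), relationships between words
>> fall out of the text's statistics without any symbol ever being grounded
>> in something external:
>>
>> # Sketch: distributional structure from raw text alone. The sentences are
>> # invented; only co-occurrence counts are used, never any referent.
>> from collections import Counter, defaultdict
>> from math import sqrt
>>
>> sentences = [
>>     "the moon shines at night",
>>     "the stars shine at night",
>>     "the sun shines during the day",
>>     "i drink coffee during the day",
>> ]
>>
>> # For each word, count which other words share a sentence with it.
>> contexts = defaultdict(Counter)
>> for sentence in sentences:
>>     words = sentence.split()
>>     for w in words:
>>         for other in words:
>>             if other != w:
>>                 contexts[w][other] += 1
>>
>> def cosine(a, b):
>>     dot = sum(a[k] * b[k] for k in a)
>>     norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
>>     return dot / norm if norm else 0.0
>>
>> print(cosine(contexts["moon"], contexts["stars"]))   # relatively high
>> print(cosine(contexts["moon"], contexts["coffee"]))  # much lower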
>>
>> Jason
>>
>