[ExI] LLMs cannot be conscious

Stuart LaForge avant at sollegro.com
Thu Mar 23 00:47:46 UTC 2023


Quoting Gordon Swobe via extropy-chat <extropy-chat at lists.extropy.org>:

> On Tue, Mar 21, 2023 at 6:43 AM Jason Resch <jasonresch at gmail.com> wrote:
>
>>
>> I address this elsewhere in the thread. A sufficient intelligence, given
>> only a dictionary, could eventually decode its meaning. I provided an
>> example of how it could be done.
>>
>
>
> I saw that, and I disagree. I think if you try to work out an example in
> your head, you will see that it leads to an infinite regress, an endless
> search for meaning. Like ChatGPT, you will learn which word symbols define
> which other word symbols, and you will learn the rules of language (the
> syntax), but from the dictionary alone you will never learn the actual
> meaning of the words (the referents).

Look on the bright side, Gordon. You are guaranteed to at least learn  
the actual meaning of the word "dictionary" from a dictionary alone, as  
well as the actual meaning of the word "sentence". You will even learn  
the actual meaning of the word "word". There are plenty of referents  
in a dictionary if you know what to look for. ;)

>
> Try it with any word you please. You rapidly accumulate a massive list of
> words for which you have no meaning and for which you must keep looking up
> definitions, finding more words for which you have no meaning, and in your
> list you also have many common words (like "the" and "a") that lead to
> endless loops in your search for meaning.
>
Many words have no meaning on their own, or have too many meanings;  
their meaning depends almost entirely on where, when, and how they  
are used.
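
Incidentally, the lookup process you describe is just a walk over a  
directed graph from each word to the words in its definition. Here is  
a minimal Python sketch of that walk (the three-entry toy dictionary  
below is invented purely for illustration):

from collections import deque

# Toy dictionary: each word maps to the words of its "definition".
# Words without an entry are simply undefined in this toy example.
toy_dictionary = {
    "dog": ["a", "domesticated", "animal"],
    "animal": ["a", "living", "organism"],
    "a": ["used", "before", "a", "noun"],
}

def chase_definitions(start):
    """Breadth-first search following definitions outward from one word."""
    seen = set()
    queue = deque([start])
    while queue:
        word = queue.popleft()
        if word in seen:
            continue  # a loop: this word already came up earlier
        seen.add(word)
        queue.extend(toy_dictionary.get(word, []))  # undefined words dead-end
    return seen

print(sorted(chase_definitions("dog")))

Note that the search is not literally endless: the dictionary is  
finite, so the walk closes over a finite set of words and stops. What  
never terminates is the search for grounding, since every path either  
cycles back on itself or dead-ends at another word, never at a  
referent outside the dictionary.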

Stuart LaForge
