[ExI] LLMs cannot be conscious

Jason Resch jasonresch at gmail.com
Fri Mar 24 00:22:09 UTC 2023


On Thu, Mar 23, 2023, 7:33 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Thu, Mar 23, 2023 at 4:11 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Thu, Mar 23, 2023 at 1:02 PM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> Others had argued on this thread that it was impossible to extract
>>>> meaning from something that lacked referents. It seems you and I agree that
>>>> it is possible to extract meaning and understanding from a data set alone,
>>>> by virtue of the patterns and correlations present within that data.
>>>>
>>>
>>> With the caveat that referents are themselves data, so if we include
>>> appropriate referents in that data set then yes.  Referents are often
>>> referenced by their correlations and matching patterns.
>>>
>>
>> I don't understand what you are saying here.
>>
>
> Do you agree that referents are data?  If not, why not?
>


What is a referent? My understanding was that according to you and Adrian,
things like dictionaries and Wikipedia text lack referents since they are
just bodies of text.

My belief is that it doesn't matter. If there are scrutable patterns
present in the data, then an intelligence can find them and work out how
to understand them.


> If they are data, then they - as data - can be included in a data set.
>
> You talked about "a data set alone", without specifying what that data set
> was.  In other words, that there exists such a data set.
>
> A data set that includes referents, is a data set that includes referents.
>

For clarity, could you give an example of a data set that includes
referents? I just want to ensure we're talking about the same thing.



> If it is possible to extract meaning from certain referents, then it is
> possible to extract meaning from a data set that includes those referents -
> specifically by extracting meaning from those referents, regardless of what
> else may or may not also be in that data set.
>
> This is probably not what you meant to say.  However, in practice, many
> data sets will include referents...even if it may take a while to find them.
>
> Again I refer to the "first contact" problem.  How does someone who
> encounters a people speaking an utterly different language, with no
> pre-existing translators or translations to reference, begin to establish
> communication with these people?
>

I gave an example of this: assuming I happened upon a dictionary in a
language I didn't recognize, I showed how one could exploit mathematical
definitions to find important constants, decode the numeral system, then
the periodic table, and work up through various elements and compounds.
That would provide enough of a scaffolding to work out the rest. Like a
puzzle, it gets easier with each next word that is solved.
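To make the first step of that bootstrap concrete, here is a toy sketch of
my own (not from the original post): if an unknown dictionary writes numbers
with ten unfamiliar digit symbols, and one entry plainly encodes a famous
constant such as pi, a brute-force search over symbol-to-digit assignments
recovers the numeral system. The Greek letters standing in for "alien"
digits are purely illustrative.

```python
import itertools

# Stand-ins for ten unfamiliar numeral symbols (illustrative only).
UNKNOWN_DIGITS = "αβγδεζηθικ"

def encode(decimal_text: str, mapping: dict) -> str:
    """Re-express a decimal string using the alien digit symbols."""
    return "".join(mapping.get(ch, ch) for ch in decimal_text)

# Pretend this string appeared next to the dictionary's "circle" entry.
true_map = {str(d): UNKNOWN_DIGITS[d] for d in range(10)}
alien_pi = encode("3.14159", true_map)

def crack(alien_number: str, constant: str):
    """Try digit assignments until the alien string matches a known
    constant such as pi; return the recovered symbol->digit mapping."""
    symbols = sorted({ch for ch in alien_number if ch in UNKNOWN_DIGITS})
    digits_used = sorted({ch for ch in constant if ch.isdigit()})
    for perm in itertools.permutations(digits_used, len(symbols)):
        candidate = dict(zip(symbols, perm))
        decoded = "".join(candidate.get(ch, ch) for ch in alien_number)
        if decoded == constant:
            return candidate
    return None

recovered = crack(alien_pi, "3.14159")
print(recovered)  # each alien symbol mapped back to its decimal digit
```

Once a handful of digits are pinned down this way, longer constants and
arithmetic identities in the text over-constrain the rest of the mapping,
which is what makes the subsequent steps (periodic table, compounds)
progressively easier.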



> Obviously it is possible, as human beings have done this very thing
> multiple times throughout history.  Consider that, and you will have the
> beginnings of how an AI that may include a LLM can come to truly understand
> words.  By definition of the problem, the answer lies outside of just words
> alone - and thus, outside of what something that is just a LLM can do.
>

If you look at this paper: https://arxiv.org/pdf/2303.12712.pdf

You will see that early versions of GPT-4, despite not yet having been
trained on images at the time, were still able to draw images of various
objects in various graphical languages. This shows that an LLM can learn
more than just words. It somehow gained an ability to picture things in
its head.
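To illustrate what "drawing in a graphical language" means here (my own
example, not taken from the paper): formats like SVG are themselves text,
so a model that only ever emits tokens can still describe a picture. The
shapes below are a hypothetical stand-in for the paper's unicorn drawings.

```python
# A text-only "drawing": a circle with a triangle above it as a horn.
# Everything the renderer needs is contained in this string of tokens.
def toy_svg_drawing() -> str:
    """Return a minimal SVG image built purely from text."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
        '<circle cx="50" cy="60" r="25" fill="white" stroke="black"/>'
        '<polygon points="50,35 45,10 55,10" fill="gold" stroke="black"/>'
        "</svg>"
    )

svg = toy_svg_drawing()
print(svg)  # a browser would render this string as an image
```

The point is that producing such a string requires an internal model of
spatial layout (where the horn sits relative to the head), which is why the
paper's authors treat it as evidence of more than word prediction.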

Jason

