[ExI] LLMs cannot be conscious

Adrian Tymes atymes at gmail.com
Thu Mar 23 23:23:41 UTC 2023


On Thu, Mar 23, 2023 at 4:11 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, Mar 23, 2023 at 1:02 PM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Others had argued on this thread that it was impossible to extract
>>> meaning from something that lacked referents. It seems you and I agree that
>>> it is possible to extract meaning and understanding from a data set alone,
>>> by virtue of the patterns and correlations present within that data.
>>>
>>
>> With the caveat that referents are themselves data, so if we include
>> appropriate referents in that data set then yes.  Referents are often
>> referenced by their correlations and matching patterns.
>>
>
> I don't understand what you are saying here.
>

Do you agree that referents are data?  If not, why not?

If they are data, then they - as data - can be included in a data set.

You talked about "a data set alone" without specifying what that data set
was.  In other words, you only asserted that some such data set exists.

A data set that includes referents is, trivially, a data set that includes
referents.

If it is possible to extract meaning from certain referents, then it is
possible to extract meaning from a data set that includes those referents -
specifically by extracting meaning from those referents, regardless of what
else may or may not also be in that data set.

This is probably not what you meant to say.  However, in practice, many
data sets will include referents... even if it may take a while to find them.
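To make the "patterns and correlations present within that data" point concrete, here is a minimal sketch (not from the original post; the corpus, window size, and word choices are invented for illustration) of how raw co-occurrence statistics alone can reveal that two words are related, with no external referent supplied:

```python
from collections import Counter
from math import sqrt

# Toy corpus: "sun" and "moon" share contexts; "sun" and "code" do not.
corpus = [
    "the sun is bright in the sky",
    "the moon is bright in the sky",
    "the moon glows at night",
    "the sun shines at noon",
    "we write code on a computer",
    "we debug code on a computer",
]

def context_vector(word, sentences, window=2):
    """Count the words co-occurring within `window` positions of `word`."""
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t2 for j, t2 in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sun, moon, code = (context_vector(w, corpus) for w in ("sun", "moon", "code"))
print(cosine(sun, moon) > cosine(sun, code))  # shared contexts give higher similarity
```

This only shows that correlational structure carries *some* usable information; whether that amounts to meaning is exactly the question under debate in this thread.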

Again I refer to the "first contact" problem.  How does someone who
encounters a people speaking an utterly different language, with no
pre-existing translators or translations to reference, begin to establish
communication with these people?  Obviously it is possible, as human beings
have done this very thing multiple times throughout history.  Consider
that, and you will have the beginnings of how an AI that may include an LLM
can come to truly understand words.  By definition of the problem, the
answer lies outside of words alone - and thus, outside of what something
that is just an LLM can do.
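One way the "first contact" route could be sketched is cross-situational learning: pair each unknown utterance with the set of things present when it was uttered, and intersect those contexts until a unique candidate referent remains.  The words, objects, and observations below are entirely hypothetical, chosen only to illustrate the mechanism:

```python
# Each observation pairs an unknown word with the set of objects present
# when it was spoken.  (All names here are invented for illustration.)
observations = [
    ("zog",  {"river", "tree", "sun"}),
    ("zog",  {"sun", "rock"}),
    ("blip", {"tree", "rock"}),
    ("blip", {"tree", "river"}),
]

def candidate_meanings(word, obs):
    """Intersect the contexts in which `word` was uttered."""
    sets = [objects for w, objects in obs if w == word]
    return set.intersection(*sets) if sets else set()

print(candidate_meanings("zog", observations))   # {'sun'}
print(candidate_meanings("blip", observations))  # {'tree'}
```

Note that the crucial input - the set of objects present - comes from perception of the shared situation, not from text, which is the point being made above: the grounding step lies outside what words alone provide.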