[ExI] LLM's cannot be conscious

Adrian Tymes atymes at gmail.com
Thu Mar 23 22:29:55 UTC 2023


On Thu, Mar 23, 2023 at 1:02 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Others had argued on this thread that it was impossible to extract meaning
> from something that lacked referents. It seems you and I agree that it is
> possible to extract meaning and understanding from a data set alone, by
> virtue of the patterns and correlations present within that data.
>

With the caveat that referents are themselves data, so if we include
appropriate referents in that data set, then yes.  Referents are often
recognized by their correlations and matching patterns.
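
To make the "patterns and correlations" point concrete, here is a toy
sketch (my own construction, not any deployed system) that builds word
vectors purely from co-occurrence counts within a small corpus and then
compares them.  Nothing outside the text is consulted, yet words used in
similar contexts end up with similar vectors:

# Toy demonstration: relational "meaning" from co-occurrence statistics alone.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

# Count how often each word appears within two positions of every other word.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(a, b):
    # Cosine similarity between two co-occurrence vectors.
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(cooc["cat"], cooc["dog"]))     # words used in similar contexts
print(cosine(cooc["cat"], cooc["cheese"]))  # score lower when contexts differ

In this toy corpus "cat" and "dog" come out markedly more similar than
"cat" and "cheese", purely from the statistics of the text.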


>
> I am not convinced a massive brain is required to learn meaning. My AI
> bots start with completely randomly weighted neural networks of just a
> dozen or so neurons. In just a few generations they learn that "food is
> good" and "poison is bad". Survival fitness tests are all that is needed
> for them to learn that lesson. Do their trained neural nets reach some
> understanding that green means good and red means bad? They certainly
> behave as if they have that understanding, but the only data they are given
> is "meaningless numbers" representing inputs to their neurons.
>
>
>
>> Just because one type of AI could do a task does not mean that all AIs
>> are capable of that task.  You keep invoking the general case, where an AI
>> that is capable is part of a superset, then wondering why there is
>> disagreement about a specific case, discussing a more limited subset that
>> only contains other AIs.
>>
>
> There was a general claim that no intelligence, however great, could learn
> meaning from a dictionary (or other data set like Wikipedia or list of
> neural impulses timings) as these data "lack referents". If we agree that
> an appropriate intelligence can attain meaning and understanding then we
> can drop this point.
>
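
(As an aside, the bot experiment described above is easy to picture in
code.  The sketch below is only my guess at such a setup - the network
size, mutation scheme, and fitness rule are invented for illustration and
are not taken from the actual bots - but it shows how "green is good, red
is bad" can emerge from nothing but a survival-fitness score applied to
randomly weighted nets:

# Toy evolutionary sketch (my assumptions, not the actual bot code):
# tiny randomly weighted nets learn that green (food) is good and
# red (poison) is bad purely from a survival-fitness score.
import random

N_WEIGHTS = 12       # "a dozen or so neurons", modeled as one weight each
POPULATION = 50
GENERATIONS = 20

def random_net():
    return [random.uniform(-1, 1) for _ in range(N_WEIGHTS)]

def decide(net, color):
    # The net only ever sees numbers: +1 for green items, -1 for red items.
    signal = 1.0 if color == "green" else -1.0
    activation = sum(w * signal for w in net)
    return activation > 0   # True means "eat it"

def fitness(net):
    # Eating food adds fitness, eating poison subtracts it.
    score = 0
    for _ in range(20):
        color = random.choice(["green", "red"])
        if decide(net, color):
            score += 1 if color == "green" else -1
    return score

population = [random_net() for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION // 2]
    # Survivors reproduce with small random mutations.
    children = [[w + random.gauss(0, 0.1) for w in net] for net in survivors]
    population = survivors + children

best = max(population, key=fitness)
print("eats green?", decide(best, "green"))   # typically True after selection
print("eats red?", decide(best, "red"))       # typically False after selection

Whether that counts as "understanding" that green means good is of course
the question at issue.)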

I recall that the claim was about "no (pure) LLM", not "no (general)
intelligence".

Also, there is a substantial distinction between a dictionary or
Wikipedia and a list of neural impulses.  A pure LLM might only be able
to consult a dictionary or Wikipedia (pictures included); a general
intelligence might be able to process neural impulses.


> There is no task requiring intelligence that a sufficiently large LLM
> could not learn to do as part of learning symbol prediction. Accordingly,
> saying a LLM is a machine that could never learn to do X, or understand Y,
> is a bit like someone saying a particular Turing machine could never run
> the program Z.
>

And indeed there are some programs that certain Turing machines are unable
to run.  For example, if a Turing machine contains no randomizer and no way
to access random data, it is unable to run a program in which one of the
steps requires true randomness.  Much has been written about the limits of
pseudorandom generators; I defer to that literature to establish that they
are meaningfully distinct from truly random sources, at least under common
circumstances of significance.
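
As a quick illustration of that distinction: seeded with the same state, a
pseudorandom generator reproduces exactly the same "random" sequence,
whereas an entropy source outside the program's deterministic state does
not.  (The snippet below just uses Python's standard random and secrets
modules to show the contrast; it illustrates the distinction rather than
proving anything about its limits.)

# Deterministic pseudorandomness vs. external entropy.
import random
import secrets

def prng_sequence(seed, n=5):
    # A PRNG with a fixed seed is fully deterministic.
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

print(prng_sequence(42))   # identical on every run
print(prng_sequence(42))   # ...and identical to the line above

# secrets draws on the operating system's entropy pool, which lies outside
# the program's own deterministic state, so repeated runs differ.
print([secrets.randbelow(100) for _ in range(5)])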

One problem is defining when an AI has grown to be more than just an LLM.
What is just an LLM, however large, and what is not just an LLM (whether or
not it includes an LLM)?