[ExI] LLM's cannot be conscious
Jason Resch
jasonresch at gmail.com
Thu Mar 23 19:59:15 UTC 2023
On Thu, Mar 23, 2023, 3:12 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Thu, Mar 23, 2023 at 11:59 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, Mar 23, 2023, 2:37 PM Adrian Tymes via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Thu, Mar 23, 2023 at 11:09 AM Jason Resch <jasonresch at gmail.com>
>>> wrote:
>>>
>>>> Take all the neural impulses from the sense organs a human brain
>>>> receives from birth to age 25 as a huge list of tuples in the format:
>>>> (neuron id, time-stamp). This is ultimately just a list of numbers. But
>>>> present in these numbers exists the capacity for a brain to learn and know
>>>> everything a 25-year-old comes to learn and know about the world. If a
>>>> human brain can do this from this kind of raw, untagged, "referentless"
>>>> data alone, then why can't a machine?
>>>>
>>>
>>> "A machine" can, if it is the right kind of machine.
>>>
>>
>> Then you would agree with me that patterns and correlations alone within
>> an unlabeled dataset are sufficient to bootstrap meaning and understanding
>> for a sufficient intelligence?
>>
>
> Again: the error comes in categorizing which kind of "sufficient
> intelligence".
>
Acknowledged. Others on this thread had argued that it was impossible to
extract meaning from something that lacks referents. It seems you and I
agree that it is possible to extract meaning and understanding from a data
set alone, by virtue of the patterns and correlations present within that
data.
I am not convinced a massive brain is required to learn meaning. My AI bots
start with randomly weighted neural networks of just a dozen or so neurons.
Within a few generations they learn that "food is good" and "poison is
bad". A survival-based fitness test is all that is needed for them to learn
that lesson. Do their trained neural nets reach some understanding that
green means good and red means bad? They certainly behave as if they have
that understanding, yet the only data they are ever given are "meaningless
numbers" representing inputs to their neurons.
> Just because one type of AI could do a task does not mean that all AIs are
> capable of that task. You keep invoking the general case, where an AI that
> is capable is part of a superset, then wondering why there is disagreement
> about a specific case, discussing a more limited subset that only contains
> other AIs.
>
There was a general claim earlier in this thread that no intelligence,
however great, could learn meaning from a dictionary (or another data set,
such as Wikipedia or a list of neural impulse timings) because such data
"lack referents". If we agree that an appropriate intelligence can attain
meaning and understanding from such data, then we can drop this point.
>
>> A pure LLM like the ones we have been discussing is not the right kind of
>>> machine.
>>>
>>
>> That's an assertion but you do not offer a justification. Why is a LLM
>> not the right kind of machine and what kind of machine is needed?
>>
>
> As posted previously, the right kind of machine might incorporate a LLM,
> but not consist only of a LLM (in other words, be a "pure LLM"). More
> capabilities than just a LLM are necessary.
>
Like what?
Note that the type of intelligence required of a LLM is a universal kind:
predicting which symbols follow a given sample of preceding symbols
requires general and universal intelligence. (
https://static.aminer.org/pdf/PDF/000/014/009/text_compression_as_a_test_for_artificial_intelligence.pdf
). Intelligence, ultimately, is all about prediction. See also:
https://en.m.wikipedia.org/wiki/AIXI
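Here is a toy illustration of that prediction/compression link (the bigram
model and the sample text are stand-ins I am assuming for the example, not
anything taken from the linked paper):

import math
from collections import Counter, defaultdict

text = "the cat sat on the mat. the cat ate the rat."

# Train a bigram next-character predictor with add-one smoothing.
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

alphabet = sorted(set(text))

def prob(prev, nxt):
    # P(next char | previous char), smoothed so no symbol has probability 0.
    c = counts[prev]
    return (c[nxt] + 1) / (sum(c.values()) + len(alphabet))

# Bits needed to encode the text under this predictor vs. a uniform coder.
model_bits = -sum(math.log2(prob(p, n)) for p, n in zip(text, text[1:]))
uniform_bits = (len(text) - 1) * math.log2(len(alphabet))

print(f"uniform coding: {uniform_bits:.0f} bits")
print(f"bigram model:   {model_bits:.0f} bits")

A model with more knowledge of English (or of the world) would assign higher
probabilities to what actually comes next and so compress the text further,
which is exactly the sense in which better prediction is better
intelligence.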
There is no task requiring intelligence that a sufficiently large LLM could
not learn to do as part of learning symbol prediction. Accordingly, saying a
LLM is a machine that could never learn to do X, or understand Y, is a bit
like saying a particular Turing machine could never run program Z.
If it's a problem that can be solved by intelligence, then the LLM
architecture, given enough training and enough neurons, can learn to do it.
Neural networks are themselves universal in the functions they can learn to
approximate:
https://towardsdatascience.com/can-neural-networks-really-learn-any-function-65e106617fc6
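As a self-contained sketch of that universality claim (the sin target, layer
size, and learning rate are illustrative assumptions, not anything from the
linked article), a single hidden layer fit by plain gradient descent can
learn a smooth mapping from raw (x, y) samples:

import numpy as np

rng = np.random.default_rng(1)

# Target function the network must learn from raw samples alone.
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)

# One hidden layer of 32 tanh units, one linear output.
W1 = rng.normal(scale=0.5, size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backward pass (mean squared error)
    grad_pred = 2 * err / len(x)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)
    grad_W1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

    if step % 1000 == 0:
        print(f"step {step}: mse {float((err ** 2).mean()):.4f}")

Widening the hidden layer or adding more training data tightens the fit;
nothing about the architecture has to change.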
This is why I tend to doubt claims of inability concerning these networks
absent some justification. For example, if you could show that the 100
trillion neurons in GPT-4's brain are not enough to understand English
because understanding English requires 200 trillion neurons (for some
reason), that would be something. But even that would say nothing about the
limits of the LLM architecture, only about the limits of GPT-4.
Jason