[ExI] LLM's cannot be conscious

Jason Resch jasonresch at gmail.com
Thu Mar 23 18:58:11 UTC 2023


On Thu, Mar 23, 2023, 2:37 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Thu, Mar 23, 2023 at 11:09 AM Jason Resch <jasonresch at gmail.com> wrote:
>
>> Take all the neural impulses from the sense organs a human brain receives
>> from birth to age 25 as a huge list of tuples in the format: (neuron id,
>> time-stamp). This is ultimately just a list of numbers. But present in
>> these numbers exists the capacity for a brain to learn and know everything
>> a 25-year-old comes to learn and know about the world. If a human brain can
>> do this from this kind of raw, untagged, "referentless" data alone, then
>> why can't a machine?
>>
>
> "A machine" can, if it is the right kind of machine.
>

Then you would agree with me that patterns and correlations alone within an
unlabeled dataset are sufficient to bootstrap meaning and understanding for
a sufficiently capable intelligence?
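As one minimal sketch of the "list of tuples" format described earlier in the thread, here is some toy Python. The spike values and the co-firing function are entirely hypothetical illustrations, not real neural data or any actual learning algorithm; the point is only that, with no labels attached, the sole structure available in such data is statistical, e.g. which neuron ids tend to fire close together in time:

```python
from collections import defaultdict

# A toy spike train: each tuple is (neuron id, timestamp in seconds),
# sorted by timestamp. Values are illustrative only.
spikes = [(17, 0.001), (42, 0.003), (17, 0.004), (99, 0.010), (42, 0.012)]

def cofiring_counts(spikes, window=0.005):
    """Count how often pairs of distinct neurons fire within `window`
    seconds of each other -- a crude example of a pattern that can be
    extracted from raw, untagged, "referentless" numbers."""
    counts = defaultdict(int)
    for i, (n1, t1) in enumerate(spikes):
        for n2, t2 in spikes[i + 1:]:
            if t2 - t1 > window:
                break  # spikes are time-sorted, so no later pair qualifies
            if n1 != n2:
                counts[tuple(sorted((n1, n2)))] += 1
    return dict(counts)

print(cofiring_counts(spikes))  # -> {(17, 42): 2, (42, 99): 1}
```

Nothing here depends on any external referent: the correlations emerge from the numbers alone, which is the claim at issue.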


> A pure LLM like the ones we have been discussing is not the right kind of
> machine.
>

That's an assertion, but you do not offer a justification. Why is an LLM not
the right kind of machine, and what kind of machine is needed?

> A pure LLM handles far fewer kinds of data than a human brain does.
>

The human brain only handles one kind of data: neural impulses.

If you think images are important, you should know that GPT-4 was trained
on both images and text. ( https://openai.com/research/gpt-4 )

Jason