[ExI] Ben Goertzel on Large Language Models
Darin Sunley
dsunley at gmail.com
Thu Apr 27 22:48:18 UTC 2023
Sounds right to me.
LLMs are a language cortex. They understand everything the same way you
understand your mother tongue: preconsciously.
The human neural architecture involves an agentic core of sensors and
instincts and memory, sitting under a modelling layer that can take
memories of sensory inputs and generate sensory inputs of hypothetical
situations, and a language cortex that sits on top of all that. LLMs are,
at best, the top layer of that stack and part of the second. They are
probably doing some environmental modelling, building implicit and not
particularly complete models from regularities in their training
data (language that was generated by beings modelling stuff and describing
the process linguistically), but they aren't optimized for that, and are at
best OK at it. [Though to be fair, sometimes the miracle is that the bear
can ride a unicycle at all, never mind how well.]
But a modelling cortex is not going to be meaningfully more complex than a
linguistic cortex - probably well within the scales we can train LLMs at
now. And hindbrain-style agentic stimulus-response machines - we've been
building those forever.
Yes, LLMs are not AGIs. And yes, LLMs cannot become AGIs by piecewise
tinkering or evolution or achieving consciousness or any other woo.
But.
We are only perhaps one or two major breakthroughs in the use and
application of the tools that build LLMs away from someone tying all of
those layers together into something meaningfully more than the sum of its
parts.
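To make the layering concrete, here is a minimal sketch in Python of what
"tying the layers together" might look like: an agentic core supplying
drives and memory, a modelling layer rolling hypothetical situations
forward, and an LLM on top verbalizing and proposing actions. All class,
method, and variable names here are hypothetical and illustrative only;
this is not a claim about how any existing system is actually built.

# Illustrative sketch only: hypothetical names, no real library APIs.
# Three layers, loosely mirroring the architecture described above.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AgenticCore:
    """Hindbrain layer: sensors, instincts, and memory (stimulus-response)."""
    memory: List[str] = field(default_factory=list)

    def sense(self) -> str:
        # Stand-in for real sensor input.
        return "obstacle ahead"

    def drive(self) -> str:
        # Stand-in for an instinctive goal.
        return "reach the charging station"


@dataclass
class ModellingLayer:
    """Middle layer: generates hypothetical futures from remembered inputs."""
    def imagine(self, observation: str, action: str) -> str:
        # A real system would roll a learned world model forward here.
        return f"if I '{action}' given '{observation}', I predict some outcome"


@dataclass
class LanguageCortex:
    """Top layer: an LLM used to propose actions in words."""
    llm: Callable[[str], str]  # any text-in/text-out model

    def propose(self, goal: str, observation: str) -> str:
        return self.llm(f"Goal: {goal}. Observation: {observation}. "
                        f"Suggest one action.")


def agent_step(core: AgenticCore,
               model: ModellingLayer,
               cortex: LanguageCortex) -> str:
    """One pass through the stack: sense -> verbalize -> simulate -> remember."""
    obs = core.sense()
    action = cortex.propose(core.drive(), obs)
    rollout = model.imagine(obs, action)
    core.memory.append(rollout)
    return action


if __name__ == "__main__":
    # Example wiring with a dummy "LLM" that ignores its prompt.
    stack = (AgenticCore(), ModellingLayer(),
             LanguageCortex(llm=lambda prompt: "turn left"))
    print(agent_step(*stack))

The point of the sketch is only that none of the three pieces is doing
anything exotic on its own; the open question is what glues them into one
system.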
On Thu, Apr 27, 2023 at 4:27 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> "To be clear -- we have enough of a theory of AGI already that it SHOULD
> [be] clear nothing with the sort of architecture that GPT-n systems have
> could really achieve HLAGI. But the abstract theory of AGI has not been
> fleshed out and articulated clearly enough in the HLAGI context. We need to
> articulate the intersection of abstract AGI theory with everyday human life
> and human-world practical tasks with sufficient clarity that only a tiny
> minority of AI experts will be confused enough to answer a question like
> this with YES ..."
>
> -Ben Goertzel
>
> https://twitter.com/bengoertzel/status/1642802030933856258?s=20
>
> -gts
>
>
>
> On Thu, Apr 27, 2023 at 3:43 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, Apr 27, 2023, 2:20 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> *"LLMs ain't AGI and can't be upgraded into AGI, though they can be
>>> components of AGI systems with real cognitive architectures and
>>> reasoning/grounding ability."*Gordon,
>>> What this has to do with the grounding ability? Nothing.
>>> In fact, I would agree with 90 % of the sentence (besides can't be
>>> upgraded into AGI because we don't know yet).
>>>
>>
>> I would go further and say it is self-contradictory. If it can be a
>> component of an AGI system, then adding the rest of the AGI system to an
>> LLM is a considerable upgrade - and so, as an upgrade, would upgrade that
>> LLM to an AGI.
>>