[ExI] Ben Goertzel on Large Language Models

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 27 22:19:41 UTC 2023


"To be clear -- we have enough of a theory of AGI already that it SHOULD
[be] clear nothing with the sort of architecture that GPT-n systems have
could really achieve HLAGI.  But the abstract theory of AGI has not been
fleshed out and articulated clearly enough in the HLAGI context. We need to
articulate the intersection of abstract AGI theory with everyday human life
and human-world practical tasks with sufficient clarity that only a tiny
minority of AI experts will be confused enough to answer a question like
this with YES ..."

-Ben Goertzel

https://twitter.com/bengoertzel/status/1642802030933856258?s=20

-gts



On Thu, Apr 27, 2023 at 3:43 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Thu, Apr 27, 2023, 2:20 PM Giovanni Santostasi via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> *"LLMs ain't AGI and can't be upgraded into AGI, though they can be
>> components of AGI systems with real cognitive architectures and
>> reasoning/grounding ability."*Gordon,
>> What this has to do with the grounding ability? Nothing.
>> In fact, I would agree with 90 % of the sentence (besides can't be
>> upgraded into AGI because we don't know yet).
>>
>
> I would go further and say it is self-contradictory.  If an LLM can be a
> component of an AGI system, then adding the rest of the AGI system to that
> LLM is a considerable upgrade - and so, as an upgrade, would upgrade that
> LLM into an AGI.
>
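For concreteness, here is a minimal sketch, in Python, of what "an LLM as
a component of a cognitive architecture" could look like: the LLM as a
stateless subroutine inside a larger system that supplies the memory,
grounding, and goals it lacks. Every module name below is hypothetical,
invented purely for illustration; this is not any actual AGI design.

# Illustrative sketch only: an LLM as one component inside a larger
# cognitive architecture, next to separate grounding and memory modules.
# All names here are hypothetical, not any real system's API.

class LLMComponent:
    """Stateless text predictor: no goals, memory, or grounding of its own."""
    def generate(self, prompt: str) -> str:
        return f"<completion for: {prompt!r}>"  # stand-in for a real model call

class GroundingModule:
    """Ties symbols to sensor data / world state, the part the LLM lacks."""
    def ground(self, goal: str, world_state: dict) -> str:
        return f"{goal}, given observations {sorted(world_state)}"

class CognitiveArchitecture:
    """Wires the LLM together with grounding and persistent memory."""
    def __init__(self) -> None:
        self.llm = LLMComponent()
        self.grounding = GroundingModule()
        self.memory: list[str] = []  # persistent state lives here, not in the LLM

    def step(self, observation: dict, goal: str) -> str:
        prompt = self.grounding.ground(goal, observation)
        thought = self.llm.generate(prompt)  # the LLM used as a subroutine
        self.memory.append(thought)
        return thought

agent = CognitiveArchitecture()
print(agent.step({"camera": "red cube on table"}, "describe the scene"))

On this framing, the disagreement in the thread is whether bolting those
modules onto an LLM counts as "upgrading the LLM" (Adrian's reading) or as
building a different system that merely contains one (Goertzel's).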