[ExI] Ben Goertzel on Large Language Models

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 22:39:06 UTC 2023


Here is an interview with Ben where he discusses these topics. He does say
that LLMs are limited, but he also claims they could do basically 98% of
what humans do, apart from things like inventing jazz or quantum mechanics.
He also says that one can add components to LLMs to reach AGI.
He is not that dismissive at all; he is just emphasizing that a few more
modules are needed, which is the same thing I have said. The main point is
actually how close Ben thinks we are to achieving AGI, which he claims is
5 years away.
https://www.youtube.com/watch?v=MVWzwIg4Adw




On Thu, Apr 27, 2023 at 3:27 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> "To be clear -- we have enough of a theory of AGI already that it SHOULD
> [be] clear nothing with the sort of architecture that GPT-n systems have
> could really achieve HLAGI.  But the abstract theory of AGI has not been
> fleshed out and articulated clearly enough in the HLAGI context. We need to
> articulate the intersection of abstract AGI theory with everyday human life
> and human-world practical tasks with sufficient clarity that only a tiny
> minority of AI experts will be confused enough to answer a question like
> this with YES ..."
>
> -Ben Goertzel
>
> https://twitter.com/bengoertzel/status/1642802030933856258?s=20
>
> -gts
>
>
>
> On Thu, Apr 27, 2023 at 3:43 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Thu, Apr 27, 2023, 2:20 PM Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> *"LLMs ain't AGI and can't be upgraded into AGI, though they can be
>>> components of AGI systems with real cognitive architectures and
>>> reasoning/grounding ability."*Gordon,
>>> What this has to do with the grounding ability? Nothing.
>>> In fact, I would agree with 90 % of the sentence (besides can't be
>>> upgraded into AGI because we don't know yet).
>>>
>>
>> I would go further and say it is self-contradictory. If an LLM can be a
>> component of an AGI system, then adding the rest of the AGI system to the
>> LLM is a considerable upgrade, and so, as an upgrade, it would upgrade
>> that LLM to an AGI.