[ExI] Ben Goertzel on Large Language Models

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 21:18:11 UTC 2023


*"LLMs ain't AGI and can't be upgraded into AGI, though they can be
components of AGI systems with real cognitive architectures and
reasoning/grounding ability."*Gordon,
What this has to do with the grounding ability? Nothing.
In fact, I would agree with 90% of the sentence (except for "can't be
upgraded into AGI", because we don't know that yet).
Knowledge is not advanced by decree, so it doesn't matter which authority
says what. But as authorities go, Ben Goertzel is one of the AI experts
whom I admire myself.
He claims we will reach AGI in less than 5 years.

It doesn't matter how this is going to be achieved; he seems to be more on
the side of people like Jason, Ben, and me, who claim we are close to
achieving AGI, while others think it is not possible for machines to be
conscious, ever.
So I'm glad you brought him into the mix (even if indirectly).

Giovanni

On Thu, Apr 27, 2023 at 12:56 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Some of you might know or remember Ben Goertzel. He is probably on ExI.
> Many years ago, I subscribed to his AGI mailing list, and I still follow
> him on Facebook and Twitter. I'm hardly in the same league with Ben on the
> subject of AGI, but I notice that he has some of the same reservations
> about LLMs that I have.
>
> He notes the same grounding problem that I have been going on about here
> for weeks now:
>
> "LLMs ain't AGI and can't be upgraded into AGI, though they can be
> components of AGI systems with real cognitive architectures and
> reasoning/grounding ability."
>
> He disagrees that GPT-4 shows the "sparks of AGI" in any meaningful way.
>
> "Looking at how GPT4 works, you'd be crazy to think it could be taught or
> improved or extended to be a true human level AGI."
>
> He did a twitter poll and was surprised at how many people disagreed with
> him:
>
> "1 ) Well holy fukkazoly, I have to say I'm surprised  by these results.
> Yah it's unscientific but ... perplexed that half my self-described AI
> expert followers think GPT-n systems w/o big additions could yield HLAGI.
> OMG.  Nooo wayyy.  Theory of HLAGI urgently needed."
>
> https://twitter.com/bengoertzel/status/1642802029071601665?s=20
>
> -gts