[ExI] Ben Goertzel on Large Language Models
Gordon Swobe
gordon.swobe at gmail.com
Thu Apr 27 19:53:46 UTC 2023
Some of you might know or remember Ben Goertzel. He is probably on ExI.
Many years ago, I subscribed to his AGI mailing list, and I still follow him
on Facebook and Twitter. I'm hardly in the same league as Ben on the
subject of AGI, but I notice that he has some of the same reservations
about LLMs that I have.
He notes the same grounding problem that I have been going on about here
for weeks now:
"LLMs ain't AGI and can't be upgraded into AGI, though they can be
components of AGI systems with real cognitive architectures and
reasoning/grounding ability."
He disagrees that GPT-4 shows the "sparks of AGI" in any meaningful way.
"Looking at how GPT4 works, you'd be crazy to think it could be taught or
improved or extended to be a true human level AGI."
He ran a Twitter poll and was surprised by how many people disagreed with
him:
"1 ) Well holy fukkazoly, I have to say I'm surprised by these results.
Yah it's unscientific but ... perplexed that half my self-described AI
expert followers think GPT-n systems w/o big additions could yield HLAGI.
OMG. Nooo wayyy. Theory of HLAGI urgently needed."
https://twitter.com/bengoertzel/status/1642802029071601665?s=20
-gts