[ExI] Ben Goertzel on Large Language Models

Gordon Swobe gordon.swobe at gmail.com
Thu Apr 27 21:34:20 UTC 2023


On Thu, Apr 27, 2023 at 3:18 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

>
> *"LLMs ain't AGI and can't be upgraded into AGI, though they can be
> components of AGI systems with real cognitive architectures and
> reasoning/grounding ability."*
>
> Gordon,
> What does this have to do with grounding ability? Nothing.
>

Ben is saying that LLMs have no "reasoning/*grounding* ability," but can be
components of AGI systems that do.

You'll have to ask Ben how he thinks AGI systems will have grounding
ability, but it is clear that he believes LLMs do not have this ability and
I agree.

(GPT-4 also agrees that it cannot solve the symbol grounding problem for
itself, but you call it a liar or a victim of brainwashing.)

-gts