[ExI] Ben Goertzel on Large Language Models
Gordon Swobe
gordon.swobe at gmail.com
Fri Apr 28 06:54:28 UTC 2023
On Fri, Apr 28, 2023 at 12:36 AM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:
>
> *grounding in the sense meant by philosophers*
>
> Philosophers are completely useless in this discussion.
>
It is more than mere philosophy. The symbol grounding problem is one of the
central challenges in AI, and it is among the reasons Ben Goertzel thinks
an LLM cannot be the core of an AGI.
> You are simply begging the question by repeating that the AI cannot have
> self-awareness or real intelligence because it has no physical referents.
Referents needn't be physical objects, and in the final analysis, even the
referents of terms for physical objects are subjective. I have explained
this many times, but you don't care to listen.
-gts