[ExI] Ben Goertzel on Large Language Models
Darin Sunley
dsunley at gmail.com
Fri Apr 28 14:49:02 UTC 2023
From https://twitter.com/EthicsInBricks/status/1225555543357632512
PhD student, c.2020:
Here’s a limited argument I made based on years of specialized research.
Hope it’s OK
Philosopher dude, c.1770:
Here are some Thoughts I had in the Bath. They constitute Universal &
Self-Evident Laws of Nature. FIGHT ME.
On Fri, Apr 28, 2023 at 12:56 AM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Fri, Apr 28, 2023 at 12:36 AM Giovanni Santostasi <
> gsantostasi at gmail.com> wrote:
>
>>
>> *grounding in the sense meant by philosophers*
>> Philosophers are completely useless in this discussion.
>>
>
> It is more than mere philosophy. The symbol grounding problem is one of
> the central challenges in AI, which is why Ben Goertzel cited it as one of
> his reasons for thinking an LLM cannot be the core of an AGI.
>
> > You are simply begging the question by repeating that the AI cannot have
> > self-awareness or real intelligence because it has no physical referents.
>
> Referents needn't be physical objects, and in the final analysis even
> physical referents are subjective. I have explained this many times, but
> you don't care to listen.
>
> -gts
>