[ExI] Ben Goertzel on Large Language Models

Gordon Swobe gordon.swobe at gmail.com
Fri Apr 28 05:30:07 UTC 2023


On Thu, Apr 27, 2023 at 11:06 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> Gordon,
> Please listen to this video.
> At the end (33''), Lemoine explicitly addresses the people who say these
> models only predict the next word. He says that is technically incorrect
> and explains why. Notice that he even uses the word "grounded" to explain
> that these systems actually have other knowledge sources to infer the
> best way to answer a query.
>
> https://www.youtube.com/watch?v=d9ipv6HhuWM
>

I had already watched it and did not notice any intelligent discussion of
the grounding problem, so I went back and rewatched the section you cited.
His words to the effect of "being grounded in other informational
backends, knowledge graphs [etc.]" are not what philosophers mean by
grounding. Grounding is about how symbols are grounded in experience, not
merely in yet more symbolic information.

By the way, I can see why some people suggested he seek help from mental
health professionals, and why Google was inclined to let him go. As I
understand the story, he went to his superiors or to HR and pleaded on
behalf of his little friend inside the computer, who supposedly has real
emotions and a soul.

-gts
