[ExI] Symbol Grounding

Gordon Swobe gordon.swobe at gmail.com
Wed Apr 26 17:25:13 UTC 2023


On Wed, Apr 26, 2023 at 10:58 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>>> I wrote to you that in my opinion you were conflating linguistics and
>>> neuroscience.
>>
>> Actually, you went further than that, arguing that linguistics is not even
>> the correct discipline. But you were supposedly refuting my recent
>> argument, which is entirely about what linguistics — the science of
>> language — can inform us about language models.
>>
>> -gts
>
> Yes, prior to my question. Which has a point. But you are still dodging it.
>
>

I simply have no interest in it. You want to make an argument from
neuroscience that somehow refutes my claim that a language model running on
a digital computer cannot know the meanings of the words in the corpus on
which it is trained, because it has no access to the referents from which
those words derive their meanings. Your arguments about neuroscience are
interesting, but I am not arguing that humans lack access to referents or
that humans do not know the meanings of words, nor am I denying that your
explanation in terms of neuroscience might be relevant to the question of
how humans understand words.

Computers have no human brains, nor for that matter the sense organs
required for symbols to be grounded in their referents, and so the question
remains how a language model running on a digital computer could possibly
know the meanings of the words in its corpus. Yet you say I am the one
dodging the question.

-gts
