<div dir="ltr"><div dir="ltr">On Wed, Apr 26, 2023 at 10:58 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><blockquote type="cite"><div><div class="gmail_quote"><div dir="auto">I wrote to you that in my opinion you were
conflating linguistics and neuroscience. </div>
<div dir="auto"><br>
</div>
<div dir="auto">Actually, you went further than that, arguing
that linguistics is not even the correct discipline. But
you were supposedly refuting my recent argument, which is
entirely about what linguistics, the science of language,
can tell us about language models.</div>
<div dir="auto"><br>
</div>
<div dir="auto">-gts</div>
</div>
</div>
</blockquote>
<br>
<br>
Yes, prior to my question. Which has a point. But you are still
dodging it.</div></blockquote><div><br></div><div>I simply have no interest in it. You want to make an argument from neuroscience that somehow refutes my claim that a language model running on a digital computer cannot know the meanings of the words in the corpus on which it is trained, since it has no access to the referents from which words derive their meanings. Your arguments about neuroscience are interesting, but I am not arguing that humans lack access to referents, that humans do not know the meanings of words, or that your neuroscientific explanation is irrelevant to the question of how humans understand words. <br><br>Computers have neither human brains nor the sense organs required for symbols to be grounded in their referents, so the question remains how a language model running on a digital computer could possibly know the meanings of the words in its corpus. Yet you say I am the one dodging the question.<br><br>-gts</div><div> </div></div></div>