<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
On 19/04/2023 15:11, Gordon Swobe wrote:<br>
<blockquote type="cite"
cite="mid:mailman.475.1681913463.847.extropy-chat@lists.extropy.org"><br>
<div class="moz-text-html" lang="x-unicode">
<div dir="ltr">
<div>Ben, I can't locate the message, but you asked my
thoughts on the difference between a language model solving
what you have called the word association problem and its
solving the symbol grounding problem. In my view, the
difference lies in the fact that understanding statistical
associations between words does not require knowledge of
their meanings. While this distinction might not make a
practical difference, it becomes important if the question
is whether the model genuinely understands the content of
its inputs and outputs or merely simulates that
understanding.</div>
</div>
</div>
</blockquote>
<br>
<br>
What are you telling us, Gordon??!!<br>
<br>
Exams are useless! Oh. My. Dog.<br>
<br>
All those exams!!<br>
<br>
I'm going to need therapy.<br>
<br>
All those exams completely failed to determine if my understanding
was real or just simulated!<br>
<br>
Watch out, guys, the Degree Police are coming for us!<br>
<br>
Surely, Gordon, there must be some test to tell us if our
understanding is real or simulated?<br>
<br>
Oh, wait, you said "this distinction might not make a practical
difference". Might not? Well, we should pray to our canine
slavemaster in the sky that it doesn't!<br>
<br>
Because, to be honest, I kind of suspect that <i>all</i> my
understanding, of everything, is merely simulated. In fact, I think
even my perception of the colour Red might be simulated.<br>
<br>
I might as well turn myself in right now. &lt;Sob&gt;.<br>
<br>
<br>
Aaaanyway, having got that out of my system, I do believe you've
twisted my words somewhat. I wasn't referring specifically to LLMs,
but to information-processing systems in general, and particularly to
human brains. I was trying to point out that the 'symbol grounding
problem' is solved by considering the associations between different
models and processes in such systems, and you even agreed with me
that when people use 'referents' they are using their internal models
of things, not referring directly to the outside world (which is
impossible; I don't remember whether you explicitly agreed to that as
well, but I think so). Therefore 'symbol grounding' = associating
internal models with linguistic tokens. I said I don't know how LLMs
work, or whether they use such internal models.<br>
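<br>
Just to make that concrete, here's a toy sketch of what I mean (the
'sensory' numbers and names are entirely invented, and I'm making no
claim about how brains or LLMs actually do it): grounding a token
just means associating it with an internal model built from signals
that reliably co-occur with it.<br>
<pre>
# Toy sketch only: invented 'sensory' feature vectors, no claim about
# real brains or LLMs. Internal models are built from the signals,
# then linguistic tokens are associated with those models.

# Pretend sensory episodes: (furriness, loudness, wheels), plus the
# word heard at the time.
observations = [
    ((0.9, 0.4, 0.0), "cat"),
    ((0.8, 0.5, 0.0), "cat"),
    ((0.0, 0.7, 1.0), "car"),
    ((0.1, 0.8, 1.0), "car"),
]

# Internal model per token = average of the sensory signals that
# consistently co-occurred with that token.
models = {}
counts = {}
for features, token in observations:
    acc = models.setdefault(token, [0.0] * len(features))
    for i, value in enumerate(features):
        acc[i] += value
    counts[token] = counts.get(token, 0) + 1

grounding = {token: [round(v / counts[token], 2) for v in acc]
             for token, acc in models.items()}

# "cat" is now tied to an internal model (a cluster of sensory
# regularities), not to any cat 'out there'.
print(grounding["cat"])   # [0.85, 0.45, 0.0]
print(grounding["car"])   # [0.05, 0.75, 1.0]
</pre>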
<br>
I also pointed out that these models can be constructed from any
signals that have consistent associations with sensory inputs, and
could be the result of any process that inputs data (including
text).<br>
<br>
Now it may be that 'understanding' does require these internal
models, and it may be that LLMs don't have them. As I said, I don't
know, and am not making any claims about either thing. So, just for
the record, I'm not one of these 'Zealots' you seem to have
constructed an internal model of (remember what I said: just because
you have a model of something, that thing doesn't have to actually
be real).<br>
<br>
In my view, you are correct that "understanding statistical
associations between words does not require knowledge of their
meanings". That's hardly a controversial position. But that's not to
say that understanding statistical associations between words cannot
<i>lead</i> to knowledge of their meanings. Several people have
already given you several examples of how it can.<br>
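<br>
Here's the sort of thing I mean, as a toy sketch (a few lines of my
own, with a made-up four-sentence 'corpus'; it is not a description
of how any real LLM works): nothing but co-occurrence statistics,
and yet 'cat' comes out measurably closer to 'dog' than to 'car'.<br>
<pre>
# Toy sketch only: a made-up corpus and plain co-occurrence counts,
# no claim about how any real LLM works.
from collections import Counter
from math import sqrt

corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the car needs fuel . the truck needs fuel .").split()

# Count which words appear within two positions of each word.
window = 2
vectors = {}
for i, word in enumerate(corpus):
    context = corpus[max(0, i - window):i] + corpus[i + 1:i + 1 + window]
    vectors.setdefault(word, Counter()).update(context)

def cosine(a, b):
    # Similarity of two co-occurrence vectors, 0.0 to 1.0.
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na * nb else 0.0

# Purely from word statistics, 'cat' is more similar to 'dog'
# (which occurs in the same sort of contexts) than to 'car'.
print(cosine(vectors["cat"], vectors["dog"]))   # roughly 0.98
print(cosine(vectors["cat"], vectors["car"]))   # roughly 0.55
</pre>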
<br>
My little ramble above deals with the difference between genuinely
understanding something and merely simulating the understanding.<br>
<br>
(I think we should also be on our guard against systems simulating
addition, as opposed to genuinely adding, not to mention a few other
things).<br>
<br>
Ben<br>
</body>
</html>