[ExI] Re: GPT-4 on its inability to solve the symbol grounding problem
Ben Zaiboc
ben at zaiboc.net
Wed Apr 19 17:18:23 UTC 2023
On 19/04/2023 15:11, Gordon Swobe wrote:
>
> Ben, I can't locate the message, but you asked my thoughts on the
> difference between a language model solving what you have called the
> word association problem and its solving the symbol grounding
> problem. In my view, the difference lies in the fact that
> understanding statistical associations between words does not require
> knowledge of their meanings. While this distinction might not make a
> practical difference, it becomes important if the question is whether
> the model genuinely understands the content of its inputs and outputs
> or merely simulates that understanding.
What are you telling us, Gordon??!!
Exams are useless! Oh. My. Dog.
All those exams!!
I'm going to need therapy.
All those exams completely failed to determine if my understanding was
real or just simulated!
Watch out, guys, the Degree Police are coming for us!
Surely, Gordon, there must be some test to tell us if our understanding
is real or simulated?
Oh, wait, you said "this distinction might not make a practical
difference". Might not? Well, we should pray to our canine slavemaster
in the sky that it doesn't!
Because, to be honest, I kind of suspect that /all/ my understanding, of
everything, is merely simulated. In fact, I think even my perception of
the colour Red might be simulated.
I might as well turn myself in right now. <Sob>.
Aaaanyway, having got that out of my system, I do believe you've twisted
my words somewhat. I wasn't referring specifically to LLMs, but to
information-processing systems in general, and particularly human
brains. I was trying to point out that the 'symbol grounding problem' is
solved by considering the associations between different models and
processes in such systems. You even agreed with me that when people
use 'referents' they are using internal models of things, not
referring directly to the outside world (which is impossible anyway; I
don't remember if you explicitly agreed to that part as well, but I think so).
Therefore 'symbol grounding' = associating internal models with
linguistic tokens. I said I don't know how LLMs work, or whether they
use such internal models.
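To make that concrete, here's a deliberately crude sketch in Python, with
entirely invented names and values. It's not a claim about how any real
brain or LLM is built; it just shows that 'grounding' in my sense can be
nothing more than a link from a linguistic token to an internal structure
built from non-linguistic signals.

from dataclasses import dataclass, field

@dataclass
class InternalModel:
    # features summarising whatever (here, made-up) sensory or other
    # signals built the model
    features: dict = field(default_factory=dict)

# internal models built from non-linguistic input channels
apple_model = InternalModel(features={"colour": "red", "shape": "round", "taste": "sweet"})
fire_model = InternalModel(features={"colour": "orange", "temperature": "hot"})

# 'symbol grounding' in the sense I was using it: a mapping from
# linguistic tokens to internal models
grounding = {
    "apple": apple_model,
    "fire": fire_model,
}

# using a referent = looking up the internal model,
# not touching the outside world directly
print(grounding["apple"].features["taste"])  # -> sweet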
I also pointed out that these models can be constructed from any signals
that have consistent associations with sensory inputs, and could be the
result of any process that inputs data (including text).
Now it may be that 'understanding' does require these internal models,
and it may be that LLMs don't have them. As I said, I don't know, and am
not making any claims about either thing. So, just for the record, I'm
not one of these 'Zealots' you seem to have constructed an internal
model of (remember what I said: just because you have a model of
something, that thing doesn't have to actually be real).
In my view, you are correct that "understanding statistical associations
between words does not require knowledge of their meanings". That's
hardly a controversial position. But that's not to say that
understanding statistical associations between words cannot /lead/ to
knowledge of their meanings. Several people have already given you
examples of how it can.
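For what it's worth, here is a toy illustration of that point (again, my
own invented example, not a description of how any actual model works):
just counting which words co-occur with which in a tiny corpus already
gives 'cat' and 'dog' more similar profiles than 'cat' and 'bone', which
is the sense in which statistics about word use can begin to track
something about meaning.

from collections import defaultdict
import math

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
]

window = 2
cooc = defaultdict(lambda: defaultdict(int))

# count co-occurrences within a small window around each word
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(a, b):
    # cosine similarity between two sparse count vectors
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 'cat' and 'dog' appear in similar contexts, so their count vectors are
# closer to each other than 'cat' is to 'bone'
print(cosine(cooc["cat"], cooc["dog"]))
print(cosine(cooc["cat"], cooc["bone"]))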
My little ramble above deals with the difference between genuinely
understanding something and merely simulating the understanding.
(I think we should also be on our guard against systems simulating
addition, as opposed to genuinely adding, not to mention a few other
things).
Ben