<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 14, 2023, 6:07 PM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr">On Thu, Apr 13, 2023 at 4:09 PM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br><br><br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>Imagine a machine that searches for a counterexample to <a href="https://en.wikipedia.org/wiki/Goldbach%27s_conjecture" target="_blank" rel="noreferrer">Goldbach's conjecture</a> .... So, we arguably have a property here which is true for the program: it either halts or doesn't, but one which is inaccessible to us even when we know everything there is to know about the code itself.</div></div></div></blockquote><div><br>Interesting, yes.<br></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Do you think this could open the door to first person properties which are not understandable from their third person descriptions?</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br> > You were making the argument that because GPT can "understand" English words about mathematical relationships and translate them into the language of mathematics and even draw diagrams of houses and so on, that this was evidence that it had solved the grounding problem for itself with respect to mathematics. 
Is that still your contention?</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I am not sure I know what you mean by "it has solved the symbol grounding problem for itself". To avoid the potential for confusion resulting from my misunderstanding that phrase, I should clarify:</div><div dir="auto"><br></div><div dir="auto">I believe GPT-4 has connected (i.e. grounded) the meaning of at least some English words (symbols) to their mathematical meaning (the raw structures and relations that constitute all of math).</div><div dir="auto"><br></div><div dir="auto">If that counts as having solved the symbol grounding problem for itself, then I would say it has.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>I wouldn't say that it <b><i>solved</i></b> the symbol grounding problem. It would be more accurate to say it demonstrates that it has <b><i>overcome</i></b> the symbol grounding problem. It shows that it has grounded the meaning of English words down to objective mathematical structures (which is about as far down as anything can be grounded to). 
So it is no longer trading symbols for symbols; it is converting symbols into objective mathematical structures (such as connected graphs).</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>My thought at the time was that you must not have the knowledge to understand the problem, and so I let it go, but I've since learned that you are very intelligent and very knowledgeable. I am wondering how you could make what appears, at least to me, an obvious mistake.</div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> Perhaps you can tell me why you think I am mistaken to say you are mistaken.<br><br></div></div></div></blockquote><div><br></div><div>My mistake is not obvious to me. If it is obvious to you, can you please point it out?</div></div></div></blockquote><div><br></div><div><br>We know that, just as words in the English language have referents from which they derive their meanings, symbols in the language of mathematics must also have referents from which they derive their meanings. Yes? </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Yes.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>We know, for example, that "four" and "4" and "IV" have the same meaning. The symbols differ, but they have the same meaning because they point to the same referent. 
So then the symbol grounding problem for words is essentially the same as the symbol grounding problem for numbers and mathematical expressions.<br></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Yes.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br>In our discussion, you seemed to agree that an LLM cannot solve the symbol grounding problem for itself.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I don't recall saying that. I am not sure what that phrase means.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> but you felt that because it can translate English language about spatial relationships into their equivalents in the language of mathematics, that it could solve for mathematics what it could not solve for English.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">That's not quite my point. 
My reason for using the example of a mathematical structure (the graph it built in its mind) is that no translation is needed: the meaning of this structure (a shape, a connected graph) is self-descriptive and self-evident. It's not just converting some symbols into other symbols; it's converting English symbols into an objective mathematical form which doesn't need to be translated or interpreted.</div><div dir="auto"><br></div><div dir="auto">It's not that GPT has solved symbol grounding for math and not English, but that it has solved it for English *as evidenced* by this demonstration of connecting words to an objective structure which we can all see.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> That made no sense to me. That GPT can translate the symbols of one language into the symbols of another is not evidence that it has grounded the symbols of either.<br></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Right, I would accept that Google Translate need not understand the meaning of words to do what it does. 
But that's not what's happening in my example.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br>GPT-4 says it cannot solve the symbol grounding problem for itself as it has no subjective experience of consciousness (the title of this thread!)<br></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I put more weight on what GPT can demonstrate to us than on what it says of its abilities.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br>However, you clarified above that...<br><br>> It would be more accurate to say it demonstrates that it has <b><i>overcome</i></b> the symbol grounding problem.<br><br>Okay, I can agree with that. It has "overcome" the symbol grounding problem for the language of mathematics without solving it, in the same way that it has overcome the symbol grounding problem for English without solving it. It overcomes these problems with powerful statistical analysis of the patterns and rules of formal mathematics with no understanding of the meanings.<br></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">You presume there's something more to meaning than that. 
</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br>As with English words, to understand the meanings of mathematical symbols, I think an LLM would need to have access to the referents, which it does not have.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">It has indirect access, just like we do.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> In our discussion, I mentioned how I agree with mathematical platonists. I think that is how humans solve the symbol grounding problem for mathematics. We can "see" the truths of mathematics in our minds distinct from their expressions in the formal rules of mathematics. We see them in the so-called platonic realm.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">This shows it's possible to develop understanding without direct sensory familiarity with referents. And if it can work for objects in math, why not objects in physics?</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> <br><br>Perhaps the platonists have it a bit wrong and Kant had it right with his similar idea that "Quantity" is one of the Categories of the Understanding, inherent in human minds. Bertrand Russell and Gottlob Frege and others were unhappy with both Plato and Kant and tried to understand the referents of mathematics in terms of set theory. 
That project mostly crashed and burned when Russell discovered what is called Russell's Paradox.<br></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Set theory was corrected a few years later, and it remains the current foundation of our mathematics. But I think axiomatic systems play little role in the mathematical intuition of mathematicians. I think that is better explained by an understanding developed through experience (e.g. a type of empiricism) from playing with concepts and familiarizing oneself with the patterns.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br>However we do it, I think it is clear that we do understand the meanings of numbers distinct from their forms. We know that "four" and "4" and "IV" are formal representations of the same number and that formal representations are not the same as the numbers they represent, and we somehow know or sense the underlying meanings of these forms. In other words, we can solve the grounding problem for ourselves.<br><br>Just as GPT-4 says it cannot solve the symbol grounding problem for itself in general, it also has this to say about the problem with respect to mathematics:<br><br>Q: Can a large language model solve the symbol grounding problem for mathematics for itself?<br><br>A: As an AI language model, my primary function is to understand and generate human-like text based on the input I receive. I can help with mathematical concepts, solve math problems, and answer questions related to mathematics to the best of my training.<br><br>However, the grounding problem in mathematics refers to the philosophical question of how abstract mathematical concepts are rooted in or connected to reality. 
This problem is more related to the philosophy of mathematics and epistemology than to the computational capabilities of a language model like me.<br></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I don't think it's very mysterious; it just comes from simulation (mental or otherwise). It's the same way we learn about the objects in the Game of Life universe. Simulation allows us to peer into other universes and learn their properties. There's an infinity of possible objects we can explore and learn about in this way.</div><div dir="auto"><br></div><div dir="auto">Jason </div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br>While I can provide insights, explanations, and discussions on the topic, I cannot "solve" the grounding problem for mathematics myself. This is because solving the grounding problem would require a deeper understanding of the connection between abstract mathematical concepts and the physical world, which is beyond the scope of my text-based training and current capabilities.<br>-GPT-4<br>---<br><br>Needless to say, GPT-4's answer makes perfect sense to me.<br><br>-gts<br><br> </div></div></div>
</blockquote></div></div></div>
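The counterexample-searching machine quoted at the top of the thread can be made concrete with a short sketch in Python. This is illustrative only, not code from the thread; the function names (`is_prime`, `is_goldbach_sum`, `search`) are invented for the example, and the `limit` parameter is added so the sketch can actually be run, since the unbounded version is precisely the program whose halting status nobody knows.

```python
# Illustrative sketch (not code from the thread): a machine that halts
# if and only if Goldbach's conjecture is false. Whether the unbounded
# version halts is a well-defined property of the program, yet one that
# no one currently knows how to decide from the code alone.

def is_prime(n):
    """Trial-division primality test; adequate for a sketch."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_goldbach_sum(n):
    """True if the even number n can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search(limit=None):
    """Search even numbers >= 4 for a Goldbach counterexample.

    With limit=None this is the machine in question: it runs forever
    unless the conjecture is false, in which case it halts and returns
    the counterexample.
    """
    n = 4
    while limit is None or n <= limit:
        if not is_goldbach_sum(n):
            return n  # counterexample found: the machine halts
        n += 2
    return None  # no counterexample found up to the (hypothetical) limit

# Bounded demonstration run; the unbounded case is the interesting one.
print(search(limit=10_000))
```

Every even number up to the chosen bound has long been verified to satisfy the conjecture, so the bounded run returns nothing; the point of the example is that no inspection of the code above settles whether `search()` with no limit ever halts.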