<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 27, 2023, 5:20 PM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><br></div><div>On Mon, Mar 27, 2023 at 3:02 PM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:</div><div dir="auto"><br><div class="gmail_quote" dir="auto"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div dir="auto"><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div><div class="gmail_quote"><div dir="auto">It certainly gives us that impression, but on careful analysis of what is actually going on, we can see that it is the human operator who attributes meaning to those symbols. GPT is merely very good at arranging them in patterns that have meaning to *us*. </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I think that's why this particular example is so important for escaping that trap: mathematical structures are objective. Which vertices are connected by which edges isn't something that can be faked or misinterpreted; it simply is.</div></div></blockquote><div dir="auto"><br></div><div dir="auto">I thought I had already mentioned that, as Giovanni correctly pointed out, mathematics is a kind of language. 
The fact that GPT can translate English words about mathematical relationships into the language of mathematics is certainly impressive, but it doesn’t “escape that trap.” </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Mathematics can be viewed as a language. But it's a language that describes objects that exist in reality. That is to say, math contains "referents." Gödel's incompleteness theorems confirm that mathematical structures exist beyond any earthly or human description of them.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div class="gmail_quote" dir="auto"><div dir="auto"><br></div><div dir="auto">When ChatGPT 3.5 first went online, I saw on Twitter several examples of how it had failed to make those translations correctly, and I understand GPT-4 is much better at it, but it is still merely manipulating the symbols of English and math.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">But please explain how you think it acquired the capacity to interpret the symbols in order to correctly draw *an image of the house*, not mere symbols about the house and not a list of mathematical language about the house, but *an accurate picture of the house*.</div><div dir="auto"><br></div><div dir="auto">It seems we're talking past each other at this point, so I don't know if any further progress can be made on this subject.</div><div dir="auto"><br></div><div dir="auto">Jason </div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div class="gmail_quote" dir="auto"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div dir="auto"><div 
dir="auto"></div></div></blockquote></div></div>
</blockquote></div></div></div>