<div dir="ltr">Gordon,<br>This is why I put it in the form of a question: how do we know that our own way of using language doesn't rely on perceived patterns and regularities (which are, in the end, statistical) in language? <br>Also, you never answered my question: do you understand that LLMs build models? It is not just "this word follows that word with this probability, so I'll use the most likely word." They don't do that. They use that as a starting point, but they needed to create a model (via training, and so via adjusting the weights in the ANN) that is predictive of how language works. <br>The main question is: is it possible that the final result, the trained ANN, spontaneously figured out language, and therefore meaning, in order to achieve this level of accuracy and mastery of language? <br>Think about these two positions:<br>1) Language is not really that meaningful; in fact, we can use statistics to determine the most likely word to follow another, and this will produce very meaningful text, so we are all stochastic parrots.<br>2) Language is so complex and full of intrinsic meaning that to really use it you need to understand it. You will never be able to use statistics alone to create meaningful text. <br><br>I cannot prove it 100%, but basically I adhere to the second camp (as Brent would call it). I think they used statistics as a starting point, and because it was impossible to handle the combinatorial explosion, they trained the system by exposing it to more and more text and increasing the number of connections in the net. Eventually the net figured out language, because the net is able to discriminate and select meaningful responses EXACTLY as the net in our brain does. <br><br>There are many examples of similar processes in nature where initially no particular goal was set by evolution but, as a side effect, something else evolved. Take the human ability to do abstract thinking. 
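(As an aside, the distinction above between a raw next-word frequency table and a trained model can be made concrete. This is only an illustrative toy sketch, with a made-up corpus, not how any real LLM is implemented; the point is just that a pure lookup table over a vocabulary of size V and context length n has up to V**n entries, which is why a compressed, generalizing model is needed instead.)

```python
# Toy illustration: raw next-word statistics vs. the combinatorial explosion
# that makes lookup tables unworkable. Corpus and numbers are made up.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# 1) Pure statistics: count which word follows which (a bigram table).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# In this toy corpus, "the" is most often followed by "cat".
most_likely_after_the = bigrams["the"].most_common(1)[0][0]

# 2) The combinatorial explosion: a literal n-gram table over a
# vocabulary of size V has up to V**n possible contexts, so the table
# cannot scale; a trained net must compress and generalize instead.
V = 50_000        # an assumed, typical vocabulary size
n = 5             # a modest context length
table_entries = V ** n   # astronomically many possible 5-word contexts
```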
It evolved to do some simple tasks, like simple planning for a hunt or understanding social structure, but then this ability developed to the point where now we can send probes to Mars. <br>It is not obvious that you can get this capability by evolving good hunters. <br>This is how emergence works: it is unexpected and unpredictable from the original components of the system that gave rise to that complexity.<br><br>Giovanni <br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 26, 2023 at 11:49 PM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 27, 2023 at 12:20 AM Giovanni Santostasi via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><b>GPT "understands" words only insofar as it understands how they fit into patterns, statistically and mathematically, in relation to other words in the corpus on which it is trained, which is what it appears to be saying here<br></b><br>1) How do you know humans do not do the same? <a href="https://www.fil.ion.ucl.ac.uk/bayesian-brain/#:~:text=The%20Bayesian%20brain%20considers%20the,the%20basis%20of%20past%20experience" target="_blank">https://www.fil.ion.ucl.ac.uk/bayesian-brain/#:~:text=The%20Bayesian%20brain%20considers%20the,the%20basis%20of%20past%20experience</a>.</div></blockquote><div><br>This is an article about Bayesian reasoning, a theory that we use Bayesian reasoning innately to estimate the likelihood of hypotheses. 
It's all very interesting, but nowhere does it even hint at the idea that humans are like language models, not knowing the meanings of words but only how they relate to one another statistically and in patterns. We know the meanings of the words in our hypotheses, for example, whether spoken or unspoken.</div><div><br>-gts<br></div><div><br></div></div></div>
</blockquote></div>