<div dir="ltr"><div>Gordon,</div>What we forget is the Model in LLM. <br>It doesn't matter (up to a point) what they trained GPT-4 on. Language is a good thing to train on given its richness of content, the incredible relations between the different words and concepts. There is a lot of structure and regularities in language. That was the input. The output was the weights of the ANN that are supposed to be an acceptable solution to understand language as judged by a human observer (this is where the Reinforced Supervised Learning component came into place). Now feed any other input, some request in a given language (GPT-4 knows many) and GPT-4 output is supposed to be a contextual coherent, informed, and aware (yes aware of the context for sure) piece of conversation. <br>This was achieved not just using stats (even if that was the starting point) but a MODEL of how language works. The model is what counts !!!<br>Why a model? Because it is impossible combinatorically to take in account of all the possible combinations a word comes with, and it is not just a word but a cluster of 2, 3 or even several words (not sure what is the limit that is considered but it is up to many words). So to address the issue of combinatorial explosion a model of the world (as you said language is the entire universe for a LLM) had to be created. It is not a model the programmer put in but the LLM created this model by the recursive training (based on just adjusting the weight in the ANN) it received. <br>This model is a model of an entire universe. It is pretty universal it seems because it can also work to solve problems also somehow related but not directly related to language. It is not super good in solving math problems (probably because more specific training in math is needed) but it does a decent job with the right prompts (like checking order of operation for example), it can resolve problems related to the theory of mind (that is somehow there in understanding language but not exactly), it can understand spatial relationships and so on. All this is because there is a MODEL of the universe inside GPT-4. <br>The MODEL is what counts. <br>Do you understand how different this is from what you thnk a LLM does? <br>Giovanni <br><br><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 17, 2023 at 12:58 PM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 17/04/2023 20:22, Gordon Swobe wrote:<br>
<div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 17, 2023 at 12:58 PM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 17/04/2023 20:22, Gordon Swobe wrote:<br>
> Let us say that the diagram above with a "myriad of other concepts <br>
> etc" can accurately model the brain/mind/body with links extending to <br>
> sensory organs and so on. Fine. I can agree with that at least <br>
> temporarily for the sake of argument, but it is beside the point. <br>
<br>
Why are you saying it's beside the point? It is exactly the point. If <br>
you can agree with that simplified diagram, good, so now, in terms of <br>
that diagram, or extending it any way you like, how do we show what <br>
'grounding' is? I suppose that's what I want, a graphical representation <br>
of what you mean by 'grounding', incorporating these links.<br>
<br>
Never mind LLMs for the moment; I just want an understanding of this <br>
'grounding' concept, as it applies to a human mind, in terms of the <br>
brain's functioning. Preferably in a nice, simplified diagram similar to <br>
mine.<br>
<br>
Ben<br>
</blockquote></div>