<div dir="auto">It's interesting but to me not that surprising, when you consider all these AI companies are using the same data sets and the same fundamental algorithms for training the networks:<div dir="auto"><br></div><div dir="auto">same function + same input -> same output </div><div dir="auto"><br></div><div dir="auto">Now between the AI companies, nothing is exactly the same. But neural networks all converge to optimal representations given more and more training, just as two students who attend the same classes but at different schools, will tend towards giving the same answers on standardized tests, and the better they study the greater the overlap you can expect between those students on those tests.</div><div dir="auto"><br></div><div dir="auto">I've recently suspected that little of the human brain's finer details are hard coded in our genes, but rather it happens to be that similarities in how different parts of the brain get organized is a result of convergence, given the similarities in the inputs brains receive from the senses.</div><div dir="auto"><br></div><div dir="auto">Note that not everything is the same between our brains. Some people have language generating capacities in one hemisphere vs. the other, in fact left handed people are more likely to have language capacities in their right hemisphere rather than their left. Even handedness might come down to differences in early training/preference that compounds as that hand becomes more adept.</div><div dir="auto"><br></div><div dir="auto">As further evidence, in animal experiments where the optic nerve was reattached to the a different part of the brain, those animals still developed normal vision, so there's nothing special about the visual cortex or it's location in the brain. If our bodies were structured so our optic nerves all connected to a some different place, say the middle of the brain rather than the back, our brain region organizations and layout would be very different, but I suspect they would all be different in similar ways. That is, between those with this modified optic nerve location, they would, I suspect, develop similar topologies for the specialized sub regions within their brains.</div><div dir="auto"><br></div><div dir="auto">This is just another case of:</div><div dir="auto">same function + same input -> same output </div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">Jason </div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Tue, Jan 13, 2026, 5:12 PM John Clark via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_default"><span style="color:rgb(26,26,26)"><font size="4" face="tahoma, sans-serif"><b>Ever since language models started to get really good most people have thought that since they had nothing to work on but words they might be useful but they couldn't form an interior mental model of the real world that could aid them in reasoning, but to the surprise of even those who wrote language models they seem to be doing exactly that. Surprisingly large language models and text to image programs converge towards the same unified platonic representation, researchers see startling similarities between vision and language models representations! 
<span style="color:rgb(26,26,26)"><font size="4" face="tahoma, sans-serif"><b>And the better the language and vision programs are, the more similar the vectors they both use to represent things become.</b></font></span><span style="color:rgb(26,26,26)"><font size="4" face="tahoma, sans-serif"><b> This discovery could lead not only to profound practical consequences but also to philosophical ones. Perhaps the reason </b></font></span><span style="color:rgb(26,26,26)"><font size="4" face="tahoma, sans-serif"><b>the language models and the vision models align is that they’re both cave shadows of the same platonic world.</b></font></span></div></div><div dir="ltr"><br></div><div dir="ltr"><a href="https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/?mc_cid=4af663cb22&mc_eid=1b0caa9e8c" target="_blank" rel="noreferrer"><font size="4" face="tahoma, sans-serif"><b>Distinct AI Models Seem To Converge On How They Encode Reality</b></font></a><br><div class="gmail_default"><br></div><div class="gmail_default"><font face="tahoma, sans-serif" size="4"><b>John K Clark</b></font></div></div>
</div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>
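<br><div dir="auto">P.S. Here is the toy sketch mentioned above. It is not the method from the Quanta article or from any particular paper; it is just a minimal illustration, on made-up synthetic data, of one way to quantify "representational overlap": embed the same items with two different models and count how often their nearest-neighbor lists agree. The function name and all numbers below are hypothetical, chosen only for illustration.</div>
<pre>
# Toy sketch (illustrative only): score how much two models' embeddings of the
# SAME items agree, by comparing their k-nearest-neighbor lists.
# Requires only numpy; all data below is synthetic.
import numpy as np

def mutual_knn_alignment(emb_a, emb_b, k=10):
    """Average fraction of shared k-nearest neighbors between two embeddings.

    emb_a: (n_items, dim_a) array and emb_b: (n_items, dim_b) array, with rows
    aligned to the same items. Dimensions may differ; only the neighborhood
    structure of each space is compared.
    """
    def knn_indices(emb):
        x = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # cosine similarity
        sims = x @ x.T
        np.fill_diagonal(sims, -np.inf)           # exclude self-matches
        return np.argsort(-sims, axis=1)[:, :k]   # indices of the k most similar items

    nn_a, nn_b = knn_indices(emb_a), knn_indices(emb_b)
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shared = rng.normal(size=(500, 32))            # stand-in for shared "world structure"
    model_a = shared @ rng.normal(size=(32, 256))  # two different random "readouts" of it
    model_b = shared @ rng.normal(size=(32, 192))
    unrelated = rng.normal(size=(500, 192))        # embeddings that share nothing
    print("two views of shared structure:", mutual_knn_alignment(model_a, model_b))
    print("unrelated embeddings:         ", mutual_knn_alignment(model_a, unrelated))
</pre>
<div dir="auto">The two projections of the shared structure should agree on neighbors far more often than the unrelated baseline, which hovers near chance (roughly k/n). An overlap score like this rising as models get better is, loosely, the kind of convergence the article describes.</div>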