<html class="apple-mail-supports-explicit-dark-mode"><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div dir="ltr"></div><div dir="ltr">The use of the term Platonic seems to be more confusing than helpful. They converge on the common tangle, real data they are given because that data was gathered by humans converging on a real, physical world. Maybe that’s what they meant but when I think of Platonic Forma it’s usually saying that realist is reflective of some “higher, spiritual” realm. That’s not at all what’s described here.</div><div dir="ltr"><br></div><div dir="ltr">I think it will be highly interesting when the AIs are in cars, robots, androids and animadroids (animal like droids) and can gather multiple modes of their own sensory data, with the pretrained LLM type matrixes giving them a head start to understand and sort data, the way our evolved brains help us quickly sort and understand data. </div><div dir="ltr"><br></div><div dir="ltr">Tara Maya</div><div dir="ltr"><br><blockquote type="cite">On Feb 9, 2026, at 10:16, Stefano Ticozzi via extropy-chat <extropy-chat@lists.extropy.org> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr"><div dir="auto"><div dir="auto">Scientific thought has long since moved beyond Platonism, demonstrating that:</div><div dir="auto">1. Ideas do not exist independently of the human mind. Rather, they are constructs we develop to optimize and structure our thinking.</div><div dir="auto">2. Ideas are neither fixed, immutable, nor perfect; they evolve over time, as does the world in which we live—in a Darwinian sense. For instance, the concept of a sheep held by a human prior to the agricultural era would have differed significantly from that held by a modern individual.</div><div dir="auto"><br></div><div dir="auto">In my view, the convergence of AI “ideas” (i.e., language and visual models) is more plausibly explained by a process of continuous self-optimization, performed by systems that are trained on datasets and information which are, at least to a considerable extent, shared across models.</div><div dir="auto"><br></div><div dir="auto">Ciao,</div><div dir="auto">Stefano</div><br><div class="gmail_quote gmail_quote_container" dir="auto"><div dir="ltr" class="gmail_attr">Il sab 7 feb 2026, 12:57 John Clark via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><a href="https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/?mc_cid=b288d90ab2&mc_eid=1b0caa9e8c" style="font-family:Arial,Helvetica,sans-serif" target="_blank" rel="noreferrer"><b><font size="4" face="tahoma, sans-serif">Why do the language model and the vision model align? 
Because they’re both shadows of the same world</font></b></a></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><font size="4"><i>The following quote is from the above: </i></font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default"><font size="4" face="tahoma, sans-serif"><b><span style="color:rgb(26,26,26)">"More powerful AI models seem to have more similarities in their representations than weaker ones. Successful AI models are all alike, and every unsuccessful model is unsuccessful in its own particular way.[...] He would feed the pictures into the vision models and the captions into the language models, and then compare clusters of vectors in the two types. He observed a steady increase in representational similarity as models became more powerful. It was exactly what the Platonic representation hypothesis predicted."</span></b></font></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="color:rgb(26,26,26);font-family:Merriweather,Georgia,serif;font-size:16px"><br></span></div><div class="gmail_default"><font color="#1a1a1a" face="arial, sans-serif" size="4"><i>In my opinion the above finding has profound philosophical implications. </i></font><span style="font-family:Merriweather,Georgia,serif;color:rgb(26,26,26);font-size:16px"><br></span></div><div class="gmail_default"><font color="#1a1a1a" face="arial, sans-serif" size="4"><i><br></i></font></div><div class="gmail_default"><b style="color:rgb(80,0,80)"><font face="tahoma, sans-serif"><font size="4">John K Clark See what's on my new list at </font><font size="6"><a href="https://groups.google.com/g/extropolis" rel="nofollow noreferrer" target="_blank">Extropolis</a></font></font></b></div></div>
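<div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><i>To make the comparison described in the quoted passage concrete, the sketch below shows one way such an experiment can be set up: embed each picture with a vision model and its caption with a language model, then score how often the two embedding spaces agree on which items are nearest neighbors. The embeddings here are random placeholders, and the mutual nearest-neighbor score is an illustrative choice of similarity metric, not the code or metric used in the study.</i></div><div class="gmail_default"><pre>
# Minimal sketch: do a vision model and a language model organize the same
# items in a similar way?  img_emb[i] is assumed to be a vision-model vector
# for picture i, and txt_emb[i] a language-model vector for that picture's
# caption.  Both arrays are placeholders, not real model outputs.

import numpy as np

def knn_indices(emb: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest neighbors (by cosine similarity) of each row."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)           # an item is not its own neighbor
    return np.argsort(-sim, axis=1)[:, :k]   # top-k most similar items per row

def mutual_knn_alignment(img_emb: np.ndarray, txt_emb: np.ndarray, k: int = 10) -> float:
    """Average fraction of k-nearest neighbors shared by the two embedding spaces."""
    vis_nn = knn_indices(img_emb, k)
    txt_nn = knn_indices(txt_emb, k)
    overlap = [len(set(vis_nn[i]).intersection(txt_nn[i])) / k
               for i in range(len(vis_nn))]
    return float(np.mean(overlap))           # 1.0 = identical neighborhood structure

# Toy usage with random stand-ins; with real, paired image/caption embeddings,
# a higher score means the two models cluster the same items more similarly.
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(500, 768))        # placeholder vision embeddings
txt_emb = rng.normal(size=(500, 512))        # placeholder language embeddings
print(mutual_knn_alignment(img_emb, txt_emb, k=10))
</pre></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><i>Run on genuinely paired embeddings from stronger and weaker model pairs, a score that rises with model strength is the kind of trend the quoted passage describes.</i></div>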
</blockquote></div></div>
<span>_______________________________________________</span><br><span>extropy-chat mailing list</span><br><span>extropy-chat@lists.extropy.org</span><br><span>http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</span><br></div></blockquote></body></html>