[ExI] Do all AI models represent “cat” in the same way?
John Clark
johnkclark at gmail.com
Tue Jan 13 22:10:42 UTC 2026
Ever since language models started to get really good, most people have assumed that because they had nothing to work with but words they might be useful, but they couldn't form an interior mental model of the real world that could aid them in reasoning. To the surprise of even the people who built them, they seem to be doing exactly that. Large language models and text-to-image programs converge toward the same unified platonic representation; researchers see startling similarities between the representations used by vision models and language models. And the better the language and vision programs get, the more similar the vectors they both use to represent things become. This discovery could have not only profound practical consequences but philosophical ones as well. Perhaps the reason the language models and the vision models align is that they're both cave shadows of the same platonic world.
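For anyone who wants a concrete sense of what "the vectors they use become more similar" means, below is a minimal, self-contained sketch of one common way representational alignment between two models is measured: embed the same set of items with each model, then ask how much the nearest-neighbor structure of the two embedding spaces agrees. The mutual nearest-neighbor score and the toy random embeddings here are illustrative assumptions on my part, not the exact metric or data the researchers used.

import numpy as np

def knn_indices(X, k):
    # k nearest neighbors of each row under cosine similarity, excluding self
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = Xn @ Xn.T
    np.fill_diagonal(sims, -np.inf)
    return np.argsort(-sims, axis=1)[:, :k]

def mutual_knn_alignment(A, B, k=10):
    # Average fraction of shared neighbors between embedding spaces A and B,
    # where row i of A and row i of B describe the same underlying item.
    na, nb = knn_indices(A, k), knn_indices(B, k)
    return np.mean([len(set(na[i]) & set(nb[i])) / k for i in range(len(A))])

rng = np.random.default_rng(0)
n = 500
# Toy stand-ins for real model outputs: a shared "concept" structure plus
# model-specific noise, in different dimensionalities for each "model".
shared = rng.normal(size=(n, 64))
text_emb = np.hstack([shared, 0.5 * rng.normal(size=(n, 32))])    # stand-in for a language model
image_emb = np.hstack([shared, 0.5 * rng.normal(size=(n, 128))])  # stand-in for a vision model

print("alignment score (0 to 1):", round(mutual_knn_alignment(text_emb, image_emb), 3))

The score runs from 0 (unrelated neighborhood structure) to 1 (identical neighborhoods); the finding described in the article is, roughly, that scores like this rise as the models involved get more capable.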
Distinct AI Models Seem To Converge On How They Encode Reality
<https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/?mc_cid=4af663cb22&mc_eid=1b0caa9e8c>
John K Clark