[ExI] Do all AI models represent “cat” in the same way?
Ben Zaiboc
benzaiboc at proton.me
Wed Jan 14 19:27:37 UTC 2026
On 14/01/2026 14:35, John K Clark wrote:
>
> Ever since language models started to get really good, most people have thought that, since they had nothing to work on but words, they might be useful but couldn't form an interior mental model of the real world to aid their reasoning. Yet to the surprise of even those who wrote the language models, they seem to be doing exactly that. Large language models and text-to-image programs converge towards the same unified platonic representation; researchers see startling similarities between vision models' and language models' representations! And the better the language and vision programs become, the more similar the vectors they both use to represent things become. This discovery could lead not only to profound practical consequences but to philosophical ones as well. Perhaps the reason language models and vision models align is that they're both cave shadows of the same platonic world.
OK. I was going to say:
"Perhaps the reason language models and vision models align in their representations is because there are practical advantages to that style of representation. I think the reasons for things in general are more likely to be rooted in the real world, and real advantages/disadvantages than dodgy metaphysical theories"
But that was before reading the article.
After reading it, my verdict is 'Clickbait'.
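That said, the underlying claim is at least testable, which is more than can be said for the platonic framing. The kind of alignment the researchers report can be measured with something like a mutual nearest-neighbour score: embed the same set of concepts with both models, then check how much the neighbourhood structure of one embedding space overlaps with the other's. Here's a rough Python sketch; the metric is my guess at the sort of thing being measured, and the data and function names are made up for illustration:

import numpy as np

def knn_indices(embeddings, k):
    """Indices of the k nearest neighbours (cosine similarity) for each row."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # a point is not its own neighbour
    return np.argsort(-sims, axis=1)[:, :k]

def mutual_knn_alignment(emb_a, emb_b, k=10):
    """Mean overlap of k-nearest-neighbour sets across two embedding spaces.

    emb_a and emb_b must be row-aligned: row i in both matrices embeds
    the same underlying concept (e.g. an image and its caption).
    Returns a value in [0, 1]; higher means more aligned.
    """
    nn_a = knn_indices(emb_a, k)
    nn_b = knn_indices(emb_b, k)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))

# Hypothetical usage: 1000 concepts embedded by a vision model (512-d)
# and a language model (768-d). The dimensions need not match, since
# only the within-space neighbour structure is compared.
rng = np.random.default_rng(0)
vision = rng.normal(size=(1000, 512))
language = rng.normal(size=(1000, 768))
print(mutual_knn_alignment(vision, language))

For random, unrelated embeddings like these the score hovers near chance (roughly k divided by the number of samples). The convergence claim amounts to saying that real vision and language embeddings score well above chance, and increasingly so as the models improve. Nothing platonic required.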
--
Ben