[ExI] Do all AI models represent “cat” in the same way?
John Clark
johnkclark at gmail.com
Thu Jan 15 11:20:16 UTC 2026
On Wed, Jan 14, 2026 at 2:29 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> ME: Ever since language models started to get really good, most people
>> thought that, since they had nothing to work on but words, they might be
>> useful but could never form an interior mental model of the real world
>> that could aid them in reasoning. Yet, to the surprise of even the people
>> who wrote the language models, they seem to be doing exactly that.
>> Surprisingly, large language models and text-to-image programs converge
>> toward the same unified Platonic representation; researchers see startling
>> similarities between the representations of vision models and language
>> models! And the better the language and vision programs get, the more
>> similar the vectors they both use to represent things become. This
>> discovery could lead not only to profound practical consequences but also
>> to philosophical ones. Perhaps the reason the language models and the
>> vision models align is that they are both cave shadows of the same
>> Platonic world.
>
> OK. I was going to say: "Perhaps the reason language models and vision
> models align in their representations is that there are practical
> advantages to that style of representation. I think the reasons for things
> in general are more likely to be rooted in the real world, and in real
> advantages and disadvantages, than in dodgy metaphysical theories." But
> that was before reading the article. After reading it, my verdict is
> 'Clickbait'.
>
That's the first time I've heard of Quanta Magazine being accused of
clickbait! I think it's one of the most responsible dispensers of
scientific and mathematical news to the general public in existence. And
what you say above is not very different from what the magazine itself
says:
*"The MIT team’s claim is that very different models, exposed only to the
data streams, are beginning to converge on a shared Platonic representation
of the world behind the data. “Why do the language model and the vision
model align? Because they’re both shadows of the same world,” said Phillip
Isola the senior author of the paper."*
The magazine gave a link to the paper in question; in case you missed it,
here it is:
The Platonic Representation Hypothesis <https://arxiv.org/pdf/2405.07987>
The magazine also gave a link to a follow-up paper written by a different
team of researchers:
Universally Converging Representations of Matter Across Scientific
Foundation Models <https://arxiv.org/pdf/2512.03750>
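For anyone who wants to poke at the claim about the vectors themselves,
below is a rough Python sketch of one way to quantify how aligned two
models' representations are: the average overlap between nearest-neighbor
sets in each embedding space, loosely in the spirit of the alignment
metric the MIT paper discusses. The embedding matrices, sizes, and metric
details here are placeholders for illustration, not taken from the paper.

import numpy as np

def knn_indices(emb, k):
    """Indices of each row's k nearest neighbors by cosine similarity."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)           # exclude self-matches
    return np.argsort(-sims, axis=1)[:, :k]   # top-k most similar rows

def mutual_knn_alignment(x, y, k=10):
    """Average overlap of k-NN sets across two embedding spaces.
    1.0 = identical neighborhood structure, 0.0 = nothing shared."""
    nx, ny = knn_indices(x, k), knn_indices(y, k)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nx, ny)]
    return float(np.mean(overlaps))

# Toy usage with random stand-in embeddings; real ones would come from an
# actual vision model and language model run over the same list of concepts.
rng = np.random.default_rng(0)
vision_emb = rng.normal(size=(100, 512))      # hypothetical vision embeddings
language_emb = rng.normal(size=(100, 768))    # hypothetical language embeddings
print(mutual_knn_alignment(vision_emb, language_emb, k=10))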
John K Clark