[ExI] Do all AI models represent “cat” in the same way?
Jason Resch
jasonresch at gmail.com
Fri Jan 16 11:11:38 UTC 2026
It's interesting, but to me not that surprising, when you consider that all
these AI companies are using the same data sets and the same fundamental
algorithms for training the networks:
same function + same input -> same output
Now, between the AI companies nothing is exactly the same. But neural
networks all converge toward optimal representations given more and more
training, just as two students who take the same classes at different
schools will tend to give the same answers on standardized tests; and the
better they study, the greater the overlap you can expect between those
students on those tests.
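To make that concrete, here is a toy sketch of how one might test the claim
(my own illustration, not from the Quanta article; the architecture, data,
and hyperparameters are all assumptions made for the sake of the example).
It trains two small PyTorch networks from different random initializations
on the same inputs and targets, then compares their hidden-layer activations
with linear CKA, a standard representation-similarity score:

import torch
import torch.nn as nn

def make_net(seed):
    # Different random initialization for each network.
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def train(net, x, y, steps=2000, lr=1e-2):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()
    return net

def linear_cka(a, b):
    # Linear CKA: similarity of two sets of centered activations,
    # invariant to rotation and scaling of the feature axes.
    a = a - a.mean(0, keepdim=True)
    b = b - b.mean(0, keepdim=True)
    return ((a.T @ b).norm() ** 2 / ((a.T @ a).norm() * (b.T @ b).norm())).item()

# Same "world": both networks see the same inputs and the same target function.
torch.manual_seed(0)
x = torch.rand(512, 2) * 2 - 1
y = torch.sin(3 * x[:, :1]) * torch.cos(2 * x[:, 1:])

net1 = train(make_net(1), x, y)
net2 = train(make_net(2), x, y)

# Hidden-layer activations of each trained net on the same probe inputs.
h1, h2 = net1[:2](x).detach(), net2[:2](x).detach()
print("linear CKA between hidden layers:", linear_cka(h1, h2))

If the convergence story is right, I'd expect the CKA score to come out
close to 1 even though the two networks' raw weight matrices look nothing
alike: it's the representations that converge, not the parameters.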
I've recently come to suspect that few of the human brain's finer details
are hard-coded in our genes; rather, the similarities in how different
parts of the brain get organized are a result of convergence, given the
similarities in the inputs brains receive from the senses.
Note that not everything is the same between our brains. Some people have
language-generating capacities in one hemisphere rather than the other; in
fact, left-handed people are more likely to have language capacities in
their right hemisphere rather than their left. Even handedness might come
down to differences in early training/preference that compound as that hand
becomes more adept.
As further evidence, in animal experiments where the optic nerve was
reattached to a different part of the brain, those animals still developed
normal vision, so there's nothing special about the visual cortex or its
location in the brain. If our bodies were structured so our optic nerves
all connected to some different place, say the middle of the brain rather
than the back, our brain region organization and layout would be very
different, but I suspect they would all be different in similar ways. That
is, those with this modified optic nerve location would, I suspect, develop
similar topologies for the specialized subregions within their brains.
This is just another case of:
same function + same input -> same output
Jason
On Tue, Jan 13, 2026, 5:12 PM John Clark via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Ever since language models started to get really good, most people have
> thought that since they had nothing to work on but words they might be
> useful but they couldn't form an interior mental model of the real world
> that could aid them in reasoning, but to the surprise of even those who
> wrote language models they seem to be doing exactly that. Surprisingly,
> large language models and text-to-image programs converge toward the same
> unified platonic representation; researchers see startling similarities
> between vision and language models' representations! And the better the
> language and vision programs are, the more similar the vectors they both
> use to represent things become. This discovery could lead not only to
> profound practical consequences but also to philosophical ones. Perhaps the
> reason language models and the vision models align is because they're
> both cave shadows of the same platonic world.
>
> *Distinct AI Models Seem To Converge On How They Encode Reality*
> <https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/?mc_cid=4af663cb22&mc_eid=1b0caa9e8c>
>
> *John K Clark*
>
>