[ExI] Do all AI models represent “cat” in the same way?
John Clark
johnkclark at gmail.com
Fri Jan 16 12:55:28 UTC 2026
On Fri, Jan 16, 2026 at 6:13 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> It's interesting but to me not that surprising, when you consider all
> these AI companies are using the same data sets and the same fundamental
> algorithms for training the networks: same function + same input -> same
> output
>
But in this case two different algorithms were used, and their inputs were
different; the differences were not just in details, they represented
entirely different types of things. One AI had access to nothing but a
datastream consisting of words, and the other to nothing but a datastream
consisting of pictures, and yet the resulting neural net arrangements of
the two were similar.
Regardless of whether they are words or pictures, all AIs represent
concepts as high-dimensional vectors. So an AI that has never seen
anything except words can be compared with an AI that has never seen
anything but pictures. And if the direction in high-dimensional idea
space for the word "cat" (relative to other words) is similar to the
direction in high-dimensional idea space that a picture of a cat points
in (relative to other pictures), then the two must have something in
common. And the only thing that could be is that both the pictures and
the words came from the same external reality; Plato suggested that
2,500 years ago, but this is the first time we've had experimental
confirmation that he was right.
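The comparison described above can be sketched in a few lines of numpy. This is my own toy illustration, not the method used in the article: the idea is that two models may embed the same concepts in completely different coordinate systems, yet still agree on the *relative* geometry, which we can check by comparing each model's matrix of pairwise cosine similarities. The example embeddings for "cat", "dog", and "car" are made up, and the "vision model" is simulated by rotating the word embeddings and adding noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_sim_matrix(embs):
    """Pairwise cosine similarities between the rows of an embedding matrix."""
    normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    return normed @ normed.T

# Hypothetical "word model" embeddings for: cat, dog, car (3-D for clarity).
word_embs = np.array([[1.0, 0.1, 0.0],   # cat
                      [0.9, 0.2, 0.1],   # dog (points near "cat")
                      [0.0, 0.1, 1.0]])  # car (points away from both)

# Simulate a "vision model" that sees the same world but uses different
# axes: rotate the embeddings by a random orthogonal matrix, add noise.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random rotation
vision_embs = word_embs @ Q + rng.normal(scale=0.01, size=(3, 3))

# The raw coordinates of the two models are completely different, but the
# similarity *structure* (who is close to whom) is nearly identical,
# because rotation preserves angles between vectors.
S_word = cosine_sim_matrix(word_embs)
S_vision = cosine_sim_matrix(vision_embs)
print(np.round(S_word, 2))
print(np.round(S_vision, 2))
```

In both matrices "cat" is much closer to "dog" than to "car"; that shared relative structure is what survives the change of coordinate system, and it is the kind of agreement the researchers measured between real language and vision models.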
> This is just another case of: same function + same input -> same output
No. Read the article again. This is a case of: different function +
different TYPE of input resulting in the same output. They are both
shadows of the same world. And if that wasn't what Plato was talking
about, then what was he talking about?
> Some people have language generating capacities in one hemisphere vs.
> the other, in fact left handed people are more likely to have language
> capacities in their right hemisphere rather than their left.
That's true, but I don't find it particularly interesting. What I do
find interesting is that people like Helen Keller could form a coherent
picture of the outside world very similar to yours or mine, even though
from infancy she could neither hear nor see; all she had access to were
words communicated through a form of fingerspelling. Apparently the
linguist J.R. Firth was correct when he said "You shall know a word by
the company it keeps," and he could've said the same thing about
pictures.
John K Clark
>> Ever since language models started to get really good, most people
>> have thought that since they had nothing to work on but words they
>> might be useful, but they couldn't form an interior mental model of
>> the real world that could aid them in reasoning; yet to the surprise
>> of even those who wrote language models, they seem to be doing
>> exactly that. Surprisingly, large language models and text-to-image
>> programs converge towards the same unified platonic representation;
>> researchers see startling similarities between vision and language
>> models' representations! And the better the language and vision
>> programs are, the more similar the vectors they both use to represent
>> things become. This discovery could lead not only to profound
>> practical consequences but also to philosophical ones. Perhaps the
>> reason the language models and the vision models align is because
>> they're both cave shadows of the same platonic world.
>>
>> Distinct AI Models Seem To Converge On How They Encode Reality
>> <https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/?mc_cid=4af663cb22&mc_eid=1b0caa9e8c>
>>
>> John K Clark