[ExI] Do all AI models represent “cat” in the same way?
Adrian Tymes
atymes at gmail.com
Sat Jan 17 03:00:43 UTC 2026
On Fri, Jan 16, 2026 at 7:57 AM John Clark via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> And if the word "cat" points in a specific direction in multi-dimensional idea space (relative to other words) that is similar to the direction a picture of a cat points in (relative to other pictures), then they must have something in common. And the only thing that could be is that both the pictures and the words came from the same external reality;
Incorrect. There are other possible explanations.
For instance, the creators of both sets of training data may have had
similar cultural inspirations: they "painted" the same mental image,
whether with paint or with words. The AI may simply be uncovering
this shared mental model, which would suggest that its training data
does not include much representation from creators with substantially
different mental models. That, as I understand it, is the outcome
that some who are concerned with this finding fear may be the case.
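The geometric claim under dispute, that "cat" points the same way relative to other words as a picture of a cat does relative to other pictures, can be made concrete as a comparison of relative representations across two independently trained embedding spaces. The sketch below is purely illustrative: the spaces, anchor concepts, and numbers are made-up toy values, not real model outputs.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def relative_representation(space, concept, anchors):
    # A concept's "direction relative to other concepts": its vector
    # of cosine similarities to a shared set of anchor concepts.
    return np.array([cosine(space[concept], space[a]) for a in anchors])

# Two toy embedding spaces with different dimensions and bases,
# standing in for a text model and an image model (invented numbers).
text_space = {
    "cat":  np.array([0.9, 0.1, 0.0]),
    "dog":  np.array([0.8, 0.3, 0.1]),
    "car":  np.array([0.0, 0.9, 0.4]),
    "tree": np.array([0.1, 0.2, 0.9]),
}
image_space = {
    "cat":  np.array([0.1, 0.95, 0.05, 0.0]),
    "dog":  np.array([0.2, 0.85, 0.25, 0.1]),
    "car":  np.array([0.9, 0.05, 0.4, 0.2]),
    "tree": np.array([0.2, 0.1, 0.9, 0.3]),
}

anchors = ["dog", "car", "tree"]
r_text = relative_representation(text_space, "cat", anchors)
r_image = relative_representation(image_space, "cat", anchors)

# If "cat" points the same way relative to the anchors in both spaces,
# the two relative representations are themselves nearly parallel.
alignment = cosine(r_text, r_image)
print(f"cross-modal alignment for 'cat': {alignment:.3f}")
```

Note that this only measures whether the two geometries agree; it says nothing, by itself, about *why* they agree, which is exactly the point at issue: shared external reality and shared cultural priors in the training data would both produce a high alignment score.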