[ExI] Do all AI models represent “cat” in the same way?
Mike Dougherty
msd001 at gmail.com
Sat Jan 17 22:42:55 UTC 2026
On Sat, Jan 17, 2026, 11:04 AM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> JSON is optimized for robustness in certain ways that don't
> necessarily apply in LLM token contexts. For instance, in the latter,
> you may be able to guarantee that newlines will only appear where you
> want them, and they won't be inserted between tokens during
> transmission; not all contexts JSON is used in can make that same
> guarantee. That's why JSON uses more characters to represent the same
> thing.
>
I once raced another developer to build a solution: his used the then-hyped
XML, mine used JSON. I worked slightly harder on the server side... and it
looked like I had fallen behind when he moved on to the client-side code
to process the XML sent by his server. While he was still debugging the first
draft of that effort, I finished the entire task. For those who don't know,
browsers have native support for parsing JSON - so the second half of my race
was effectively free.
I wonder if AI has arrived at an optimization shaped by how humans use these
ideas. Are WE the commonality? How would we even know if they used a
completely different methodology when humans aren't involved? Hmm.
>