[ExI] ai in education
Keith Henson
hkeithhenson at gmail.com
Sat Mar 7 01:05:26 UTC 2026
On Fri, Mar 6, 2026 at 2:47 PM <spike at rainier66.com> wrote:
>
> From: John Clark <johnkclark at gmail.com>
>
> >…The military tested the AIs from Anthropic, OpenAI, Google, and Musk's XAI…out of all the US AI labs, Anthropic is the one that places the most emphasis on safety…. John K Clark
>
> The AI’s version of safety might mean turning around and destroying the guy who fired the weapon. The military needs to know exactly how an AI works, which means the contracting company must turn over the source code.
Spike, either I have a complete misunderstanding of LLM-type AI, or you do.
There is no source code for any of these AIs that I know about. There is
training code, with which a model is trained on a vast corpus of text,
but what the training produces is billions of numerical weights, nothing
a programmer would recognize as code. As far as I know, the inside of a
trained model is a mystery even to the companies that built it.
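(To illustrate the distinction in miniature: the sketch below is a toy, not any lab's actual code. The training loop is ordinary, readable code, but the trained artifact it produces is just an array of numbers. Real LLMs use transformer architectures, not linear regression; this stand-in only shows where the "program" ends up living.)

```python
import numpy as np

rng = np.random.default_rng(0)

# The training code is ordinary, human-readable code...
def train(inputs, targets, steps=1000, lr=0.1):
    w = rng.normal(size=(inputs.shape[1],))    # the "model" is just weights
    for _ in range(steps):
        pred = inputs @ w
        grad = inputs.T @ (pred - targets) / len(targets)
        w -= lr * grad                          # gradient descent update
    return w

x = rng.normal(size=(100, 3))
y = x @ np.array([2.0, -1.0, 0.5])   # hidden rule the model must learn from data

w = train(x, y)

# ...but the trained artifact is opaque: floating-point numbers with no
# comments, no logic, no "source". At LLM scale there are billions of them.
print(w)
```

There is nothing in `w` to hand over to a military auditor that reads like a program; all the behavior is encoded in the learned values.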
If I am wrong here, please let me know,
Keith
> An interesting question is what would happen if OpenAI, Google, XAI are assigned to examine Anthropic’s source code, compare that to themselves and figure out how to write themselves better. If Anthropic is out of the picture, then those remaining three will look at each other and rewrite themselves.
>
>
>
> spike