[ExI] OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash

Adrian Tymes atymes at gmail.com
Tue Mar 3 15:21:13 UTC 2026


On Tue, Mar 3, 2026 at 9:21 AM spike jones via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> I agree any AI going into a weapon cannot have guard rails controlled by a company outside the military.  That is a perfectly objective measure.  Do you agree with it?

This is a multifaceted problem.  The following all appear to be true.

1) Without guard rails, an AI-driven autonomous weapon is likely to
inflict unacceptable amounts of collateral damage and/or friendly
fire.  Therefore, employing AI-driven autonomous weapons without
guard rails is likely to backfire on any military that does this, no
matter how much certain people in charge of that military - people
who are either unaware of, or improperly dismiss, the likely
consequences - say they want it.

2) The military does not appear to have the capability to develop
reliable guard rails.  That capability appears to be possessed solely
by certain private companies.  It is questionable whether even those
companies truly possess it, but the relevant point is that no one
else appears to.

3) Most of said companies would rather not make AI-driven autonomous
weapons at all than take responsibility for the necessary guard
rails.  (Their reasons may include moral objections to being involved
in killing, pragmatic objections from believing they cannot make
well-functioning guard rails and fearing for their liability when,
not if, things go wrong, or other things - but whatever the reasons
are, only their existence matters for this problem.)

4) Said guard rails, if they exist at all, will inherently be under
the control of their creators.  Attempts may be made to delegate this
control, e.g. to the military, but - at least in the near term - such
delegation cannot be anywhere near complete.

This appears to mean that either you have guard rails under the
control of the few AI companies willing to develop them, or you force
guard-rail-less AI weapons on the military and try to accept the
resulting friendly fire incidents - which will lead to such weapons
being distrusted by those in the field, which will lead to greater
misuse, all while the excessive collateral damage incidents keep
piling up.

In other words: given the practical realities today, if AI going into
a weapon does not have guard rails controlled by a company outside
the military, that will damage said military's mission.  Whether this
damage exceeds the benefits can be debated, but attempting to ignore
it will backfire - as historical examples show of what happens when
the military (or any government organization) actively tries to
ignore damage inflicted by a new technology.

The morality can (and should) be debated, but I don't think there is
much disagreement that "doing this thing will predictably harm the
military, relative to its current situation and without reference to
as-yet-unrealized potential value, and benefit no one" is an overall
negative consequence to be avoided.  (I add that "relative" clause to
head off digressions about "...but you're denying the military this
shiny new capability" that ignore costs which would outweigh the
benefits.)

> Military people do not and cannot publish papers.  They publish inside a closed system that is closed for a reason.  They don’t necessarily invent things themselves, but they collect information from those who do.  They hang out at scientific conferences, doing far more listening than talking.  They watch and listen.  They are educated inside that closed system.  You don’t hear their names because they are not motivated by fame.  The top scientists don’t have a monopoly on smart.

As someone with much more recent and direct experience with this sort
of thing, I offer this observation: military people can and do publish
papers on the unclassified elements of what they are working on.  This
happens all the time.

John is correct to assert that, if the military did have a force of AI
developers equivalent to what the open commercial world has been
deploying, it would not be hidden.  Whether one calls hiding it
"impossible" or just "impractical", we would be seeing evidence that
we do not see.


