[ExI] OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
BillK
pharos at gmail.com
Sat Feb 28 23:16:08 UTC 2026
On Sat, 28 Feb 2026 at 22:42, spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> There is no going back. If the bad guy is developing AI weapons, then AI
> weapons are required for defense against them. It is a race to the
> technology or die. The Chinese get it.
> This isn't that new, really. The Navy has had loitering
> target-identification torpedoes for some time.
>
> Regarding what the Pentagon is doing with AI, it is perfectly reasonable
> to strictly control what goes into the AI's training. The generals must
> know that the AI-enabled weapon they carry into battle will not turn
> around and kill the soldiers who fired it.
>
> spike
> _______________________________________________
>
I wonder whether Hegseth has considered asking an AI about this policy?
I just did, and Perplexity AI described it as a race to the bottom.
BillK
Perplexity AI -
The policy you describe would push military AI toward maximized capability
and speed at the expense of safety, civil liberties, and long‑term
strategic stability. If other major powers follow the same path, it risks a
global “race to the bottom” on AI guardrails, especially in surveillance
and weapons systems.
What the Pentagon is pushing for
Recent reporting describes U.S. defense officials pressuring Anthropic to
allow its models to be used for any “lawful” purpose, explicitly
including AI‑directed weapons and domestic surveillance, and threatening to
end a large contract if the company refuses. Anthropic, by contrast, has
drawn firm lines against mass domestic surveillance and fully autonomous
weapons, even while supporting many other national‑security uses like
intelligence analysis and cyber defense.
The Defense Department’s stance is that internal corporate safety limits
are “obstacles” if the underlying use is legal, and that the military must
retain full control over how AI is deployed in war and intelligence
operations. This pits operational flexibility and secrecy against
vendor‑imposed ethical and legal constraints.
Domestic implications: surveillance, rights, and power
If “no guardrails” effectively means allowing broad AI‑enabled surveillance
so long as some legal justification can be found, several consequences
follow:
- *Mass domestic surveillance at scale*: AI can aggregate email, browsing
history, movement records, and other purchased or intercepted data to
build detailed profiles of citizens, turning fragmented, controversial
practices into a coherent, far more invasive capability. This is
materially different from human‑scale monitoring because of its reach,
persistence, and ability to infer sensitive attributes.
- *Weakening of civil liberties*: Current law already lags behind
technology, and many AI‑enabled surveillance tactics are not clearly
illegal even if they undermine privacy and freedom of association. Once
such tools exist and are normalized, they are hard to roll back and can
chill protests, journalism, and political opposition.
- *Reduced accountability*: If models are used for watchlists, risk
scores, or “pattern of life” analysis without strict constraints and audit
trails, their outputs can become de facto targeting or profiling decisions
that are opaque and difficult to challenge. Errors or bias then have
real‑world consequences but no clear line of responsibility.
In short, removing guardrails in this domain risks moving liberal
democracies toward the kind of pervasive, data‑driven social control they
often criticize in authoritarian systems.
Battlefield and escalation risks
On the military side, the key danger is sliding from AI “decision support”
into de facto or full autonomy in lethal force:
- *From support to de facto control*: AI systems already aggregate sensor
data, predict enemy actions, and support targeting decisions; without
guardrails and human‑in‑the‑loop requirements, commanders may come to rely
on recommendations that function as effective orders. Under time pressure,
the human “check” can become ceremonial.
- *Fully autonomous weapons*: Anthropic and many legal scholars explicitly
warn that weapons that identify, select, and engage targets without
meaningful human control pose serious ethical and legal problems, even if
not yet clearly banned. They complicate compliance with distinction and
proportionality in the laws of war, and make attribution and
accountability for war crimes harder.
- *Escalation and accident risk*: High‑speed, opaque systems interacting
in contested environments (e.g., drone swarms, automated cyber responses)
increase the chance of misinterpretation, unintended engagements, and
rapid escalation before humans can intervene. Once multiple states adopt
such systems, crisis stability becomes more fragile.
A “no guardrails” posture thus prioritizes short‑term tactical advantage
over long‑term control of escalation and adherence to international
humanitarian law.
International race dynamics: China, Russia, and others
China and Russia already view AI as central to military modernization,
information dominance, and intelligence, surveillance, and reconnaissance
(ISR). China leverages its vast data resources and civil‑military fusion to
build powerful surveillance and command systems; Russia emphasizes
autonomous systems, robotics, and AI‑assisted reconnaissance and targeting.
If the U.S. effectively declares that operational needs trump safety
guardrails, several dynamics are likely:
- *Race to the bottom*: Each side can point to others’ aggressive use of
AI to justify loosening its own constraints, arguing that restraint would
be a unilateral disadvantage. This discourages meaningful human‑control
requirements or surveillance limits.
- *Norms and governance erosion*: Efforts at international regulation of
lethal autonomous weapons and surveillance norms rely on leading powers
showing some willingness to accept constraints. If the U.S. insists on
unconstrained lawful use, it weakens its ability to argue against
similarly unconstrained systems deployed by China, Russia, or others.
- *Export and proliferation*: Once unrestricted military AI systems exist,
they will diffuse via arms sales, proxies, and commercial adaptation,
making powerful surveillance and targeting tools available to less
accountable regimes. That amplifies repression and instability globally.
So the policy is not just about U.S. capability; it helps set the global
standard for what is seen as acceptable military and security use of AI.
Governance and policy choices
Analysts describe the current clash over guardrails as a governance
“inflection point” for how frontier AI will be integrated into defense and
intelligence institutions. Several policy choices are in play:
- *Who sets the limits*: One model gives governments near‑total
discretion, with companies treated as arms‑length vendors; another allows
developers to embed legal and ethical restrictions into their products,
even for state clients. The outcome will shape procurement standards and
the balance of power between states and major AI firms.
- *Embedding accountability*: Strong guardrails would include clear scope
limitations, logging and auditability, mandatory human authorization for
high‑risk uses, and mechanisms to trace specific decisions back to
responsible commands. A “no guardrails” approach tends to undercut these
structures in favor of operational freedom.
- *Aligning law with capability*: Much of what is technically possible is
“lawful” simply because legal frameworks have not yet caught up. Without
proactive regulation (on surveillance thresholds, data use, and autonomous
weapons), “lawful use” becomes a very weak safeguard.
In effect, the Pentagon’s position asserts that legality plus internal
military processes are enough, while critics argue that for systems this
powerful and opaque, *ex ante* technical and contractual constraints are
essential complements to law and doctrine.
------------------------------
Taken together, a push for AI “without guardrails” in surveillance and
military contexts risks entrenching mass monitoring, eroding civil
liberties, increasing the chance of accidental or unlawful use of force,
and driving a global competition toward ever less constrained systems. A
more cautious approach would insist on meaningful human control, narrow and
auditable uses, and international norms that apply equally to the U.S.,
China, Russia, and others.
-----------------------------------------------------