[ExI] AI controlled weapons now being used in Ukraine
BillK
pharos at gmail.com
Fri Sep 12 15:47:47 UTC 2025
On Fri, 12 Sept 2025 at 11:54, BillK <pharos at gmail.com> wrote:
> Terminators: AI-driven robot war machines on the march
> Science fiction? Battle bots already used in Ukraine.
> Steven J. Vaughan-Nichols Fri 12 Sep 2025
>
> <https://www.theregister.com/2025/09/12/terminators_aidriven_robot_war_machines/>
> ------------------------------
I asked CHATGPT-5 to explain more about the dangers of AI-controlled
weapons.
The answer was rather frightening.
BillK
CHATGPT-5:
Alright, let’s slow it down and walk through both sets of risks: the ones *The Register* article explicitly pointed out, and the extra ones I added that sit just under the surface.
------------------------------
*Risks the article itself highlighted*
1. *Misidentification of targets*
   - AI systems rely on sensors and pattern recognition. In the chaos of battle (smoke, debris, unusual angles), they may confuse a civilian car for a military one, or mistake surrendering soldiers for combatants.
   - Once that happens, the system can act faster than a human can intervene.
2. *Delegation drift (“humans in theory, machines in practice”)*
   - Militaries insist on keeping humans in the loop, but in high-speed engagements, operators often defer to AI judgments because reaction times are too short.
   - This creates a *slippery slope*: formally, there’s oversight; in practice, the machine is calling the shots.
3. *Encoding of human bias*
   - AI is trained or parameterized using human input. If biases are built in (for example, labeling certain groups as “suspicious”), the AI can reproduce or even amplify those prejudices.
   - Unlike a human officer, the AI can’t contextualize or show discretion once its rules are set.
4. *Civilian harm*
   - Even small errors in target selection can have massive humanitarian impact. Drone swarms or autonomous munitions don’t “second-guess”; they act.
   - This risk is especially acute in urban battlefields where fighters and civilians are mixed.
5. *Escalation risk*
   - The article mentions war-gaming studies where AI systems tend to escalate conflicts more aggressively than humans would.
   - Faster machine decision cycles can lead to tit-for-tat responses spiraling before humans even realize what’s happening.
------------------------------
*Additional risks that weren’t fully unpacked*
1. *Error cascades*
   - One bad decision by an autonomous system can ripple outward. Example: a drone misidentifies a truck and destroys it, cutting off communication; other systems interpret the blackout as enemy jamming, escalating further.
   - Machines chained together can amplify mistakes faster than humans can step in.
2. *Adversarial misuse / hacking*
   - AI systems can be tricked (adversarial attacks) or hijacked. Imagine painting symbols on a roof that fool a drone’s vision into thinking it’s a friendly target, or jamming GPS so it mis-navigates. (A small code sketch of this kind of attack follows this list.)
   - If hacked, the weapon can literally be turned against its owner.
3. *Arms race dynamics*
   - Nations may feel pressured to *out-automate* rivals. The faster one side delegates control to machines, the more others will follow, not wanting to be left behind.
   - This speeds up the adoption of immature, poorly tested systems, raising accident risks.
4. *Lower threshold for conflict*
   - Cheap, autonomous weapons (like drone swarms) reduce the political cost of war: fewer soldiers at risk. That makes it easier for leaders to say “yes” to escalation.
   - The psychological barrier to launching attacks gets thinner.
5. *Accountability vacuum*
   - If an autonomous system commits a war crime (say, bombing civilians), who’s responsible?
     - The commander who deployed it?
     - The coder who wrote the targeting software?
     - The company that manufactured it?
   - Right now, international law has no clear answers, which risks a culture of impunity.
6. *Norm erosion*
   - Even if current doctrine insists on human oversight, repeated use of semi-autonomous systems normalizes pushing the boundaries. Over time, the “line” of acceptable use moves quietly forward until full autonomy is the de facto norm.
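
As a concrete illustration of the “adversarial attacks” mentioned in item 2, here is a minimal sketch of one well-known technique, the Fast Gradient Sign Method, which nudges an input image just enough to flip a classifier’s decision. This is illustrative only: the PyTorch framework, the model choice, and the epsilon budget are assumptions for the sketch, not anything described in the article.

    # Minimal sketch of an adversarial (evasion) attack on an image
    # classifier, assuming PyTorch is installed. Model and epsilon are
    # illustrative assumptions, not from the article.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, images, labels, epsilon=0.03):
        """Fast Gradient Sign Method: push each pixel in the direction
        that increases the classifier's loss, within an epsilon budget."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    # Hypothetical usage with any torchvision classifier:
    #   model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
    #   adv = fgsm_perturb(model, image_batch, label_batch)
    #   model(adv).argmax(1) often disagrees with model(image_batch).argmax(1),
    #   even though the perturbed images look unchanged to a human.

The point is not this particular method but the general one: small, deliberate changes to sensor input can redirect an automated decision without any human noticing.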
------------------------------
*Why these matter together*
The picture isn’t just “AI might make mistakes.” It’s a *systemic risk stack*:
- AI speeds up decisions.
- Humans struggle to stay in the loop.
- Errors and biases scale up faster.
- Opponents exploit weaknesses.
- Political leaders face fewer costs to escalate.
- Meanwhile, no one’s clearly accountable when it all goes wrong.
That’s the cocktail that makes AI-driven weapons scarier than just “smarter missiles.”
------------------------------