[ExI] AI controlled weapons now being used in Ukraine
spike at rainier66.com
Fri Sep 12 17:02:19 UTC 2025
From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of BillK via extropy-chat
…
I asked CHATGPT-5 to explain more about the dangers of AI-controlled weapons.
The answer was rather frightening.
BillK
CHATGPT-5:
Alright, let’s slow it down and walk through both sets of risks — the ones The Register article explicitly pointed out, and the extra ones I added that sit just under the surface.
_____
Risks the article itself highlighted
1. Misidentification of targets…
BillK, my son is the software lead for a club at his university where they build flying robots and task them with various things. The bots fly about grounds the size of a regional airport (which is where this year's international competition was held, in the Washington DC area), identifying targets and in some cases taking action. The course had been set up with mannequins dressed a certain way to represent such things as a fleeing felon, a nude sunbather, an injured hiker, a dangerous wild animal and so on. I might have written about this before; I vaguely recall that I did.
Points could be scored by dropping the appropriate payload on the correct target, such as a food-and-water capsule on the injured hiker, or a fragmenting grenade on the fleeing felon (hey, that guy might be dangerous ya know (that's how it's done over here, BillK.))
Well now, think about it for a minute or less. We watched as the various teams did their competition runs, and those bots routinely misidentified their targets and did the wrong things. They didn't have their uniforms on, but you know damn well there was military brass out there watching, listening and learning, perhaps scouting for talent to recruit. Hell, we were right there in their back yard: the Pentagon was on our side of the river. That's where I would be if I were in that biz.
How the hell can we even imagine those bots won't be pressed into service? It doesn't take a prophet for profit (me) to foresee such things, for it has already happened and is happening right now: the commies and the Ukrainians are both using autonomous drones, and we aren't told whether they are making mistakes, because that information is classified up the kazoo.
Fun aside: my son has joined a team doing a related AI competition which I think he is more suited for: they are rigging up an honest-to-evolution open-wheel Indycar, completely autonomous, to race against other schools' robo-racers (Stanford, the Bears are soooo gonna catch you guys (next year if not this year.)) Well, when you think about it, a car capable of a coupla hundred miles per hour (over 300 km/h for you, BillK) is a terrific playground for AI. That is a game where you program right out to the limit, and even if there are occasional spectacular crashes, you don't need to worry that you just paid to see some hapless prole slay himself. No actual humans are harmed in any way in the making of this absurd sport.
Second fun aside: I wondered how a university sports team could afford a million-dollar play toy like that. Well, turns out a worn-out or outdated open-wheel Indycar isn't worth all that much after its technology is stale or the rules change. What are you going to do with it? Make a redneck lawn ornament? (Hey spike, now that you mention it, that would be a delightfully tacky yet refined application…) But even after they are outdated, they are still capable of going a coupla hundred down the back stretch, so why not let the old stallion run, this time with no one on his back? Cool, let's RACE!
spike