[ExI] OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
spike at rainier66.com
Tue Mar 3 22:44:36 UTC 2026
-----Original Message-----
From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of Adrian Tymes via extropy-chat
Subject: Re: [ExI] OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
On Tue, Mar 3, 2026 at 9:21 AM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> I agree any AI going into a weapon cannot have guard rails controlled by a company outside the military. That is a perfectly objective measure. Do you agree with it?
>...This is a multifaceted problem. The following all appear to be true.
>...1) Without guard rails, it is likely that an AI-driven autonomous weapon would inflict unacceptable amounts of collateral damage and/or friendly fire...
Thanks Adrian, good summary and analysis.
When a company sells a product to the military, the military is buying the source code if the product has any (nearly everything has embedded code). The military can take it apart, change anything, and do whatever it wants with it. If a company isn't willing to hand over the source code, the military can't fully control the product and will not buy it. Nearly every line of code I wrote in my career is now the property of the US government. They are free to do whatever they want with the code and the technology that went into it.
Adrian, what do you suppose the US Space Force is doing? Are they developing a rocket capable of going into earth orbit? No. They aren't talking about what they are doing in there, but I have some guesses: they watch and listen. The military has known about the risks and promise of AI since way back, and they have their ways of dealing with it. We don't know what they are doing, for they are not advertising it. They are nothing like Hollywood's vision, such as Major T.J. Kong.
spike