[ExI] AI and autonomous warfare

Kelly Anderson postmowoods at gmail.com
Tue Jul 16 16:12:31 UTC 2024


It would sure be nice if we could devise some sort of "Geneva
Conventions" for AI, but we can't even get a stable safety board
inside of a company to make sure they aren't accidentally going to
mess us up. If you can't cooperate for safety inside of a single
company, how much more difficult would it be for nations in conflict
to do the same? I can't imagine what the rules would even look like,
other than the ultra-simple "there has to be a human in the chain to
pull the trigger", but even that's already been blown through in
Ukraine, by both sides. Besides, humans slow things down and thus ruin
your potential for surprise.

Looking at the recent assassination attempt, with its collateral
damage, it seems that if there were some sort of crowdsourced
information base that was rapid enough, a life could have been saved.
Also, why weren't there AI-equipped drones flying over the area? If we
can't protect a presidential candidate with simply implemented
technology, what hope do we have of getting AI regulations right?

-Kelly

On Tue, Jul 16, 2024 at 8:33 AM BillK via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> On Mon, 15 Jul 2024 at 00:22, BillK <pharos at gmail.com> wrote:
> >
> > This long article discusses the increasing problem of autonomous weapons in warfare.
> > <https://www.theguardian.com/technology/article/2024/jul/14/ais-oppenheimer-moment-autonomous-weapons-enter-the-battlefield>
> > Quote:
> >
> > AI’s ‘Oppenheimer moment’: autonomous weapons enter the battlefield
> > The military use of AI-enabled weapons is growing, and the industry that provides them is booming
> > Nick Robins-Early      Sun 14 Jul 2024 17.00 BST
> > What has changed, researchers say, is both increased public attention to everything AI and genuine breakthroughs in the technology. Whether a weapon is truly “autonomous” has always been the subject of debate. Experts and researchers say autonomy is better understood as a spectrum rather than a binary, but they generally agree that machines are now able to make more decisions without human input than ever before.
> > -------------------------------------
> >
> > I asked Perplexity AI to review this long article.
> > BillK
> >
> > Perplexity AI -
> >
> > The article from The Guardian discusses the increasing integration of artificial intelligence (AI) into military weapons and operations, highlighting what some experts are calling AI's "Oppenheimer moment" - a reference to the development of the atomic bomb. This trend raises significant ethical, legal, and strategic implications for modern warfare.
> > <snip>
> > -----------------------
>
>
> As Perplexity AI insisted on the need for international cooperation to
> avoid an AI weapons arms race, I pointed out that while a war was in
> progress there would be little likelihood of international cooperation.
> Nations will not restrict weapon developments during wartime.
> War plus AI seems to indicate a bleak future.
>
> Perplexity AI grudgingly agreed, but then still added a long section
> about the urgent need for international cooperation.
>
> Below is the first part of the reply (with the long wishful-thinking cooperation
> section removed):
> BillK
>
> Perplexity AI:
> The continuation of the AI weapons arms race during ongoing conflicts
> is a significant concern, as nations prioritize military superiority
> over cooperative regulation. Here are the key points and implications
> based on the provided search results:
>
> 1. Intensification of the AI Arms Race
>
> Current State of Affairs:
> The AI arms race is characterized by rapid advancements in military
> technology among major powers like the U.S., China, and Russia. This
> competition is driven by the desire for strategic and tactical
> advantages on the battlefield, leading to the development of
> increasingly sophisticated AI-driven systems such as drones and
> autonomous vehicles.
>
> Implications:
>
> Escalation of Conflicts:
> The deployment of AI weapons can lead to faster and more automated
> warfare, potentially escalating conflicts due to reduced
> decision-making times and increased reliance on AI for strategic
> assessments.
>
> Ethical and Legal Challenges:
> The use of AI in warfare raises significant ethical and legal
> questions, including accountability for AI-driven actions and the
> potential for increased civilian casualties.
>
> 2. Challenges to International Cooperation
>
> War and Cooperation:
> During active conflicts, nations are less likely to engage in
> cooperative efforts to regulate AI weapons, as their immediate focus
> is on gaining military advantages. This competitive environment makes
> it difficult to establish and enforce international agreements or
> treaties aimed at controlling the development and use of AI in
> warfare.
>
> Implications:
>
> Lack of Regulation:
> The absence of international cooperation can lead to a lack of
> effective regulation, increasing the risks associated with the
> deployment of AI weapons.
>
> Proliferation of AI Weapons:
> Without international agreements, there is a higher likelihood of AI
> weapons proliferating, potentially falling into the hands of non-state
> actors or being used in ways that violate international humanitarian
> law.
> -----------------------
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat


