<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:large;color:#000000"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">the brass would argue that deployment and battlefield testing are the</span><br style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small"><span style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif;font-size:small">path to knowing more than we do now. spike</span></div><div class="gmail_default" style="font-family:"comic sans ms",sans-serif;font-size:large"><font style="font-family:Arial,Helvetica,sans-serif;font-size:small"><br></font></div><div class="gmail_default" style="font-size:large"><font style="font-size:small"><font face="comic sans ms, sans-serif">This suggests that a company that develops a product it doesn't know will be used safely should just release it to the public and count the injuries and deaths. It also depends on your values: products protected by plastic so tough you need an axe to get into suggest that preventing theft is more important than preventing personal injuries. bill w</font><br></font><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 8, 2024 at 6:51 AM spike jones via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
-----Original Message-----<br>
From: extropy-chat <<a href="mailto:extropy-chat-bounces@lists.extropy.org" target="_blank">extropy-chat-bounces@lists.extropy.org</a>> On Behalf Of<br>
BillK via extropy-chat<br>
Subject: Re: [ExI] AI Warfare Is Already Here<br>
<br>
On Thu, 29 Feb 2024 at 12:47, William Flynn Wallace via extropy-chat<br>
<<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> Y'know? Here's an awful thought: I wish that an AI in actual combat would<br>
make a mistake and kill civilians. Maybe that would be the spur to quit the damn<br>
things until we know a lot more than we do now. Never mind that a human<br>
makes the final decision - people make mistakes too. I am assured by<br>
history that military minds will rush new tech into use before we are ready<br>
for it.<br>
><br>
> Maybe I don't actually wish it, but it WILL happen - and then we'll see.<br>
bill w<br>
> _______________________________________________<br>
<br>
<br>
>...In war, far more civilians die than soldiers.<br>
Look at Gaza, Hiroshima, Dresden, or the famines and deprivation that follow<br>
war.<br>
I don't see that a few civilian deaths (collateral damage) will stop the<br>
military from deploying AI weaponry.<br>
<br>
BillK<br>
<br>
_______________________________________________<br>
<br>
<br>
<br>
Civilian deaths might actually drive the military toward deploying AI weaponry:<br>
they will argue that AI is better than humans at distinguishing targets and<br>
avoiding collateral damage. That argument may compel them to take humans out of<br>
the loop: we are too slow and too easily swayed by fear, panic, hatred, etc.<br>
<br>
Regarding the notion of quitting the damn things until we know more than we do<br>
now, the brass would argue that deployment and battlefield testing are the<br>
path to knowing more than we do now.<br>
<br>
spike<br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>