[ExI] future of warfare again, was: RE: Forking
Anders Sandberg
anders at aleph.se
Mon Jan 2 13:03:02 UTC 2012
On 2012-01-01 18:04, spike wrote:
> Agreed, but the focus of the more sophisticated modern war machinery,
> isn’t aimed at the personnel, but rather the other machines of war. The
> modern warrior has nothing against the adversary’s guys. They can have
> as many guys as they want, for without the sophisticated mechanisms of
> warfare, they are as harmless as an army of kittens.
Yes and no.
In high-school one of my friends, Fredrik, was a would-be military
officer. Just as I knew I would become a scientist of some kind he was
planning out his straight-arrow military career. We had a long-running
argument about the future of the military: I was pointing out that
robotic or drone warfare would eventually become possible and more or
less take over the field, leaving the military as a bunch of nerds at
keyboards. He countered with "You are always going to need a guy on the
ground with a rifle".
He had a point. As "shock and awe" demonstrated, high-tech can wipe out a
low-tech military infrastructure. Drone warfare can hit enemy
concentrations and individuals with reasonable precision. But these
tools cannot occupy a country: maintaining civil order, gaining human
intelligence, instilling trust in whatever institutions you are trying
to set up, that requires personal interactions... and those guys with
rifles. Some of the more obvious failures in recent Middle East
conflicts have been due to the discrepancy between overwhelming
projectable force and lack of "social" interfacing.
It might be possible to enhance the guy with the rifle. Perhaps drone
infantry will appear in the next few decades ("I'm the neighborhood
soldier of Tohid Square. I patrol 9 to 5 US time, very convenient for
me. Sure, occasionally my bodies get blown up, but it is mostly a
budget problem...") Maybe they can be networked in smarter ways, like in
Adam Roberts's "New Model Army" (an anarchist ultra-flexible wiki-army).
Or ubiquitous surveillance systems can be used. But it still seems that
this is a major bottleneck since the complexity of the tasks is orders
of magnitude higher than in the direct attack phase.
The essence of attacking something is to prevent its function. This can
be surgical, with minimal effects on the surroundings or unrelated
functions, but you need to have plenty of information about the target
and its state. This is why maximizing the entropy of a target is so much
easier as an attack mode: you do not need much information, and hence
the attack is likelier to succeed in low-information or adversarial
information environments. We have nearly maxed out our ability to apply
entropy: the remaining big frontier is precision.
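A toy sketch of this information asymmetry (the model, numbers, and function names are purely illustrative assumptions of mine, not anything established): treat the target as N components, exactly one of which is critical. An entropy-maximizing attack destroys everything and needs no knowledge of the target; a surgical attack must first locate the critical component, which is worth log2(N) bits of intelligence.

```python
import math
import random

def surgical_attack(system, guess):
    """Disable only the guessed component; succeeds only if it was the critical one."""
    return guess == system["critical"]

def entropy_attack(system):
    """Destroy every component; needs no knowledge of the target's state."""
    return True  # the critical component goes down with everything else

random.seed(0)
N = 100        # components in the (hypothetical) target system
trials = 10_000

# With zero information, a surgical attacker can only guess uniformly at random.
surgical_hits = sum(
    surgical_attack({"critical": random.randrange(N)}, random.randrange(N))
    for _ in range(trials)
)
print(f"surgical success rate (no intel): {surgical_hits / trials:.3f}")  # ~1/N
print("entropy attack success rate:      1.000")

# To match the entropy attack, the surgical attacker must first *learn*
# which component is critical: log2(N) bits of information about the target.
print(f"information needed for surgery:   {math.log2(N):.1f} bits")
```

The point of the sketch is only the scaling: blind surgery succeeds at rate 1/N, while blind destruction succeeds always, so precision is exactly as expensive as the information it requires.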
The real aim of attacking stuff is of course control. It has the same
problem in terms of information as destruction. Typically warfare aims
at preventing the function of defenses/offenses, and then tilting the
utility function of the enemy using threats so that enemies now behave
as you would like them to. When this works you effectively turn enemies
into parts of your system, since they now use information they possess,
which would be hard for you to obtain, to do the things you order. The
problem is of course that you are not necessarily
changing their utilities with your threats well enough (too small
threat, too different base utility, lack of information), and you now
have to handle an unreliable system. I am reminded of the issue of
"weird machines" in computer science,
http://boingboing.net/2011/12/28/linguistics-turing-completene.html - an
occupying force is essentially interfacing with an unsecured system.
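A minimal decision-theoretic sketch of those three failure modes (the payoffs, names, and credibility parameter are hypothetical, chosen just to illustrate): the enemy complies only when the expected cost of the threat outweighs their base preference for defiance.

```python
def best_response(u_comply, u_defy, threat_cost, credibility):
    """The enemy picks whichever action has higher expected utility.
    The threat lowers the payoff of defiance by threat_cost, discounted
    by how credible the enemy believes the threat to be (0..1)."""
    expected_defy = u_defy - credibility * threat_cost
    return "comply" if u_comply >= expected_defy else "defy"

# Hypothetical payoffs: the enemy mildly prefers defiance (5 vs 3).
print(best_response(3, 5,  threat_cost=10, credibility=0.5))  # comply: 3 >= 5 - 5
print(best_response(3, 5,  threat_cost=1,  credibility=0.5))  # defy: too small a threat
print(best_response(3, 50, threat_cost=10, credibility=0.5))  # defy: too different base utility
print(best_response(3, 5,  threat_cost=10, credibility=0.1))  # defy: lack of information/credibility
```

Each of the last three calls fails for a different reason from the list above, and in every failure case you are left running an "unreliable component" that still acts on its own utilities.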
Figuring out the control problem is the *real* challenge for future
armies - the guy with the rifle is there to ensure a certain range of
behavior on the microscale. But as the above linked talk suggests, this
is likely a computationally infeasible problem. You can likely solve it
better than we do today, but it can never be solved generally - and you
can never be sure your solution doesn't contain some exploitable flaw.
Poor Fredrik.
--
Anders Sandberg
Future of Humanity Institute
Oxford University