Robot-enabled warfare: How the bad guys turned Army’s ethos against itself
Close combat is Army’s strength and its foundational principle, but recent operations show how it can be turned into a weakness if we fail to acknowledge how adversaries use robot-enabled warfare.
In helping Army and the Australian Defence Force grapple with the potential of robotics and automation, I have been struck by a recurring 'blind spot': a reluctance to see improvised explosive devices as robots. On the surface, there are obvious differences between (say) an artillery shell rigged with a tripwire and detonator, a Predator unmanned aerial vehicle, and the various robots being demonstrated in laboratories. But these differences do not matter when conceptualising the threat.
Stripped to essentials, robotic and automated systems are systems in which sensors can trigger effectors without waiting for a human. In contemporary jargon, the robot can 'close a loop' from sensors to effectors, meaning humans are not 'in' the loop. It may be of some comfort that, on foreseeable technologies, there will be at least one human 'on' the loop (see the sidebar below on moral responsibility for robots).
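The distinction can be made concrete with a minimal sketch. The functions below are purely illustrative (the names and structure are my own, not drawn from any real control system): the first models a device that closes the loop from sensor to effector with no human decision, the second models supervisory control, where a human 'on' the loop can negate the action.

```python
def closed_loop_device(sensor_triggered: bool) -> bool:
    """Human out of the loop: the sensor output drives the effector
    directly, as with a tripwire-initiated explosive device."""
    return sensor_triggered


def supervised_device(sensor_triggered: bool, human_veto: bool) -> bool:
    """Human 'on' the loop (supervisory control / command by negation):
    the device acts autonomously unless the supervising human vetoes."""
    return sensor_triggered and not human_veto


# A tripped sensor always fires the closed-loop device...
assert closed_loop_device(True) is True
# ...whereas the supervised device can be negated by the human on the loop.
assert supervised_device(True, human_veto=True) is False
assert supervised_device(True, human_veto=False) is True
```

The point of the sketch is that 'on the loop' is a veto, not a trigger: in the supervised case the default behaviour is still autonomous action, which is why responsibility questions (see sidebar) remain with the humans.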
It may indeed be accurate to say that 'the most effective employment of unmanned vehicle systems has, to date, been against irregular enemies', if we confine our attention to the Western-style military forces deployed on operations since 2001. But under this widened view of robotics and automation, we might ask whether the most effective employment of robotic systems has in fact been by irregular forces against regular enemies; that is, by al Qaeda, the Taliban and others, who have built robots from the resources available to them and employed them as 'the single most effective weapon against our deployed forces'.
Experience of improvised explosive devices in recent operations therefore poses an important question: what is it like to be on the receiving end of robot-enabled warfare? One answer (of many) is that it turns Army's ethos of close combat into a weakness. By human standards, an improvised explosive device is a terrible close combatant. Making it mobile and equipping it with better sensors and weapons might produce a slightly less terrible close combatant. But this does not matter: the robot merely has to be good enough to fixate our soldiers' attention while the humans behind it get on with what they want to do.
Army must excel at close combat, otherwise adversaries will have a haven from which to operate. But Army must also recognise adversaries' robots for what they are – a means for the adversary to operate from depth, at reduced risk. Their technology might currently be crude, but it is robot-enabled warfare on their terms and not ours.
Sidebar – Responsibility for robots on foreseeable technologies: 'On' the loop is the colloquial term for what technologists call 'supervisory control', and it corresponds to the military concepts of 'mission command' and 'command by negation'. In an article I wrote in 2014, I argued that, on foreseeable technologies, moral (ethical) responsibility for a robot will be held by the humans who are 'on' the loop. These humans include the soldier who deploys and operates the robot, and the people who built it. There is ambiguity in how responsibility is apportioned across those people, just as there is ambiguity in how much responsibility should be attributed to firearm manufacturers vis-à-vis firearm users. The key point, however, is that the robot itself holds zero responsibility. Robot command-and-control systems must therefore be designed so that the responsible humans can discharge their responsibilities effectively.
This forum posting expands on an earlier article published in the Australian Army Journal (pdf, p.45).
The views expressed in this article and subsequent comments are those of the author(s) and do not necessarily reflect the official policy or position of the Australian Army, the Department of Defence or the Australian Government.