The Blind Spot in Robot-Enabled Warfare: Deeper Implications of the IED Challenge
Abstract
This article argues that Improvised Explosive Devices are robots. In declining to make this connection, Western militaries have been blind to their adversaries’ use of robot-enabled warfare. The effect has been to render Western soldiers tactically and operationally reactive, and on the wrong end of attrition warfare. The resolution lies in understanding how robots are supervised, and how a robot-enabled force can enable its personnel to out-adapt their human foes.
Introduction
The rapid fielding of unmanned systems in current and recent operations has prompted an urgent call for concepts and doctrine.1 However, the extant notion of a ‘robot’ assumes that a robot must be of some minimum (but unspecified) sophistication to be worthy of attention. This assumption is self-imposed by the West, and constitutes a blind spot regarding how robots can be constructed and employed.
This article describes a space of possibilities for robot-enabled warfare, and locates the West’s blind spot. Within this blind spot, the West’s adversaries are already exploiting robot-enabled warfare for structural advantage. That said, the West can still implement robot-enabled warfare for comparable advantages, without compromising existing strengths. The key is to understand how robots are actually an enabler of agility, and that robot-enabled warfare is inherently about out-adapting the human foe.
What is a 'Robot'?
We conceptualise a ‘robot’ in a manner that recognises both the technical and philosophical viewpoints. From a technical viewpoint, a ‘robot’ is a programmable machine that can sense and manipulate its environment.2 A robot is intermittently programmed by one or more operators, in what engineers call supervisory control.3 Supervisory control defines the robot’s degree of autonomy. This can be quantified as the time intervals between the operators’ supervision of the robot. Short time intervals correspond to low autonomy, while longer time intervals correspond to higher autonomy. Informally, when autonomy is low, the human supervisor is ‘in the loop’, while ‘on the loop’ corresponds to high autonomy.4
In the philosophical schools of action and agency, autonomy has stronger connotations, relating to ‘intentionality’ and ‘free will’.5 We can address this by introducing the notion of self-supervision. A self-supervising robot is one that can conduct supervisory control on itself. Self-supervision is thus more than ‘supervision at infinite autonomy’. If a human programs and deploys a robot, but then never visits it again, the robot is being supervised at infinite autonomy.
To be self-supervising, the robot needs the capacity to inspect and rewrite its own programs, and the program needs to be able to take itself as its own input (without self-destructing). Currently, there are no known working examples of self-supervising robots, only thought experiments and fiction (for instance, the robots depicted in the Terminator movie series).6 We can place robots on a spectrum, with human-supervised robots ranging from zero up to infinity under the technical definition of autonomy, and then a ‘beyond-infinity’ class for self-supervising robots.
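As a loose illustration of this spectrum (the one-minute threshold and the labels are assumptions for the sketch, not drawn from the doctrine cited), the classification could be expressed as:

```python
import math

def autonomy_class(interval_s, self_supervising=False):
    """Place a robot on the autonomy spectrum described in the text.

    interval_s: seconds between the operator's supervisory interventions;
    math.inf means the robot was deployed and never revisited.
    self_supervising marks the 'beyond-infinity' class.
    """
    if self_supervising:
        return "self-supervising (beyond infinite autonomy)"
    if interval_s == math.inf:
        return "infinite autonomy (deployed, never revisited)"
    if interval_s < 60:  # illustrative threshold for 'in the loop'
        return "low autonomy (human in the loop)"
    return "high autonomy (human on the loop)"

print(autonomy_class(1))         # low autonomy (human in the loop)
print(autonomy_class(math.inf))  # infinite autonomy (deployed, never revisited)
```

The point of the sketch is that autonomy is a single measurable axis (the supervision interval), with self-supervision sitting outside that axis rather than at its far end.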
Robot-Enabled Warfare and Killer Robots
Robot-enabled warfare is the application of robotics and automation to warfare, and especially to reach forms of warfare inaccessible to forces not equipped with robots. One form of robot-enabled warfare is to fit the robots with sensors and weapons, and assemble them into sensor-shooter systems. The defining characteristic of a so-called killer robot is that the robot closes the firing loop from sensor to shooter. To emphasise an earlier point: self-supervising killer robots are a matter of fiction at this time. This may provide comfort to critics of military robotics.7
The potential from robotics is popularly summed up as the Three Ds: Dull, Dirty and Dangerous. For a human-supervised robot, autonomy captures the spirit of Dull: assign the robot to the Dull task, and free the human to do something else. An example is the radar warning receiver, which replaces the gunners employed on Second World War aircraft. The Dull (but stressful) aspect was in scanning the skies for enemy fighters, and warning the pilot to evade. A Second World War bomber typically dedicated two gunners to this task, while a modern aircraft uses electronics. Table 1 gives further examples.
Table 1: Examples of killer robots at increasing autonomy.
System | Weapons | Sensors | Autonomy |
Sentry Tech | 0.50 cal. / 7.62mm machine gun, anti-tank missiles | Electro-optical / Infrared with night vision | Zero autonomy (remote control), sentry operations on the Gaza Strip. |
Special Weapons Observation Reconnaissance Detection System (SWORDS) | Small arms from 5.56mm to 66mm | Electro-optical / Infrared with night vision | Zero autonomy (remote control), in infantry operations. |
VIPeR | Small arms | Video camera | Zero autonomy (remote control), in infantry operations. |
Predator, Reaper, Sky Warrior | Hellfire missile | Electro-optical / Infrared with night vision | Zero autonomy (remote control), direct attack and air support missions for counterinsurgency. |
HARM (Lock-On After Launch) | Fragmentation warhead | Passive radar seeker, Laser-proximity / impact fuse | Autonomy on the order of tens of seconds in Lock-On After Launch. Fired into airspace over a suspected air-defence site, HARM will home in on the radiation emitted by radars. |
ASRAAM, AIM-9X | Fragmentation warhead | Imaging Infrared seeker, Laser-proximity / impact fuse | Autonomy on the order of tens of seconds in Lock-On After Launch. Launched as a fire-and-forget anti-aircraft missile. |
ALARM (Loiter) | Fragmentation warhead | Passive radar seeker, Laser-proximity fuse | Autonomy on the order of minutes in Loiter mode. Like HARM, ALARM targets the radiation emitted by radars. If the target shuts down their radar, ALARM will loft to altitude and deploy a parachute. If the target then starts up, ALARM will re-attack. |
Aegis Air-Warfare Combat System (Auto Special) | SM-2, SM-6 surface-to-air missiles | SPY-1 radar | Autonomy on the order of minutes, the interval between activating the ‘Auto Special’ mode. Designed as part of multi-layer defence of US carrier battlegroups against multiregiment Backfire raids. |
SGR-A1 | M249 Squad Automatic Weapon | Colour camera | Autonomy on the order of minutes to hours, over some patrolling time. Deployed overlooking the Korean Demilitarised Zone. |
Phalanx / Centurion Close-In Weapon System | 20mm cannon | Radar, infrared | Autonomy on the order of minutes to hours, over some patrolling time. Deployed for last-ditch defence against missiles, rockets or artillery. |
Captor mine | Mk 46 torpedo | Acoustic | Autonomy on the order of hours to days, deployed into contested waters. |
We can then think of Dirty and Dangerous as a distance between the robot and its supervising human (if it has one). An example is a robot used by Explosive Ordnance Disposal technicians to inspect and disarm a suspected bomb. Historically, the technician would have had to work well within the potential blast radius of the bomb. The robot enables the technician to place distance between themselves and the immediate danger. Table 2 gives further examples.
Distance and autonomy are distinct. We can have robots operating at low or high autonomy from their humans, and also at short to long distances. For instance, if we compare Table 1 and Table 2, we see SWORDS and Reaper both being operated at zero autonomy, but at vastly different distances. Meanwhile, Aegis and CROWS are operated at similar distances, but at very different autonomies.
We can thus think of robot-enabled warfare as having (killer) robots deployed at some distance from their supervising humans, and at some degree of autonomy. Robot-enabled warfare is about exploiting both of these dimensions.
Table 2: Examples of killer robots at increasing distance.
System | Weapons | Sensors | Distance |
Aegis Air-Warfare Combat System (Auto Special) | SM-2, SM-6 surface-to-air missiles | SPY-1 radar | Essentially zero distance. Aegis is supervised by the crew of the warship. |
Crew Remotely-Operated Weapon Station (CROWS) | Machine Gun | Video camera, thermal imager, laser rangefinder | Distance on the order of one metre, from the crew station in a vehicle to the external CROWS mount. |
PackBot | Bomb Disposal Kit | Electro-optical camera, chemical vapour sniffer | Distance on the order of tens to hundreds of metres. |
Special Weapons Observation Reconnaissance Detection System (SWORDS) | Small arms from 5.56mm to 66mm | Electro-optical / Infrared with night vision | Distance on the order of tens to hundreds of metres. |
Predator, Reaper, Sky Warrior | Hellfire missile | Electro-optical / Infrared with night vision | Distance on the order of thousands of kilometres. Current operations see aircraft over Afghanistan flown by operators in the continental US. |
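The two dimensions can be made concrete by treating each system as a point in a (distance, autonomy) plane. The coordinates below are order-of-magnitude assumptions read loosely from Tables 1 and 2, not authoritative figures:

```python
# (distance_m, supervision_interval_s): order-of-magnitude assumptions
# loosely drawn from Tables 1 and 2, for illustration only.
systems = {
    "SWORDS": (100, 0),      # remote control at hundreds of metres
    "Reaper": (1e7, 0),      # remote control at thousands of kilometres
    "CROWS":  (1, 0),        # remote control at ~1 metre
    "Aegis":  (0, 120),      # on-board crew, minutes of autonomy
    "Captor": (1e5, 86400),  # emplaced mine, hours to days of autonomy
}

def same_autonomy(a, b):
    """True if two systems sit at the same point on the autonomy axis."""
    return systems[a][1] == systems[b][1]

# SWORDS and Reaper share zero autonomy despite vastly different distances.
print(same_autonomy("SWORDS", "Reaper"))  # True
```

The sketch shows why the two axes are independent: systems can coincide on one axis while sitting at opposite ends of the other.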
The Blind Spot
The blind spot manifests in the following question: Is the simple land mine a form of killer robot? I contend that the answer is ‘yes’, and that answering ‘no’ constitutes a blind spot. There are two objections that have been raised by other commentators, and I address them here.
The first objection is that a mine is only ‘triggered’; its proponents suggest that attention be restricted to entities that can ‘decide’ to kill.8 However, to disregard the mine as a killer robot is to dismiss all human-supervised killer robots. The delineation of ‘triggered’ from ‘decides’ merely repeats the line drawn between ‘autonomy’ and ‘intentionality’. Having distinguished human-supervised robots from self-supervising ones on autonomy versus intentionality, we do not need a new classification for triggered versus decides.
The second objection is that a mine has an unsophisticated mechanism for closing the firing loop.9 This confuses mode-of-operation with effectiveness. Different killer robots may use different sensors, or less or more sophisticated algorithms for finding, tracking and engaging a target. Moreover, the West has learned that unconventional does not mean ineffective.10 We ignore unsophisticated possibilities at our peril.
To build a killer robot, operating in some environment and at some autonomy, we only need sensors, weapons and sensor-to-weapon technologies of sufficient performance. If we are prepared to accept this premise, then we can see how the West’s adversaries are operating within the blind spot.
How are the Coalition's Adversaries Exploiting Robot-Enabled Warfare?
Robot-enabled warfare is being employed today by al-Qaeda and the Taliban against Coalition forces in Iraq and Afghanistan. The implementation is via the car bomb, the roadside bomb and other forms of Improvised Explosive Device (IED).11
The IED satisfies the critical requirement for being a ‘killer robot’, in closing a firing loop from sensor to shooter. In this case, the ‘shooter’ is an explosive, and the firing loop is closed by some form of trigger. The components need not be sophisticated. IED have been assembled from plastic explosive, blasting caps used for mining, or old artillery shells. Triggers have included washing machine timers, doorbell buzzers and parts from radio-controlled toy cars, with input from cell phone calls, pressure plates or passive infrared signals.12 In 2007, the US military reported that the insurgents in Iraq had developed some ninety ways to trigger an IED.13
Once emplaced, the IED can wait autonomously to complete its mission, often without further intervention from the bomb-layer. At most, the IED might be remotely-detonated by the bomb-layer (a low-autonomy IED), otherwise it will be victim-activated (a high-autonomy IED).
The IED has been acknowledged by the Pentagon as ‘the single most effective weapon against our deployed forces’.14 This reflects the warfighting advantages that a robot-enabled force can gain over a non-robot force, to include:
- Low-Cost Attrition Warfare. A robot-enabled force can impose attrition warfare upon the enemy at low cost to the force. The key is to have robots that are cheap to assemble and deploy, compared to the effect that they create and the cost of countermeasures. The materials and skills for constructing an IED are readily obtained and replaced,15 with each IED having an estimated per-unit cost in the hundreds of dollars.16 For comparison, in 2004, an armoured Humvee cost roughly US$150,000, or US$1.8 billion to replace every Humvee then deployed in Iraq.17 Similarly, the annual budget for the US Joint IED Task Force (now US Joint IED Defeat Organization) rose from US$100 million in 2004 to US$1.3 billion in 2005.18
- Seizing the Initiative. A robot-enabled force can distract its enemy from central goals into defeating the robots. The key is in the robots’ autonomy; the enemy has to respect the robot as a threat, and thus split their attention between the robot and its human supervisor. Then, while the enemy is diverted by the robot, the human can make their next move. The distraction can be applied at all levels of conflict, from tactical through operational to strategic. As a soldier on the ground reacts to a recent or imminent IED, the bomb-layer may already be steps ahead in their playbook.19 Furthermore, al-Qaeda and the Taliban could be emplacing IED to sit undetected for years.
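The cost asymmetry in the first point can be made concrete with the figures cited above. The per-IED unit cost here is an assumed point within the reported ‘hundreds of dollars’; the other figures are those cited in the text:

```python
ied_cost = 500              # US$, assumed point within 'hundreds of dollars'
humvee_cost = 150_000       # US$ per armoured Humvee (2004 figure)
jieddo_budget_2005 = 1.3e9  # US$, counter-IED budget cited for 2005

# One armoured Humvee costs as much as ~300 IED; the 2005 counter-IED
# budget alone equates to ~2.6 million IED at the assumed unit cost.
print(int(humvee_cost / ied_cost))         # 300
print(int(jieddo_budget_2005 / ied_cost))  # 2600000
```

Whatever the exact per-unit figure, the ratios remain on the order of hundreds to millions, which is the structural point about low-cost attrition.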
The counterinsurgency in Iraq and Afghanistan is, of course, about more than the IED. However, we cannot address the deeper causes without acknowledging the symptoms, and their impact upon the West’s forces and those whom they seek to protect.20 The IED has certainly helped the Taliban and al-Qaeda to remain a viable adversary, and has arguably been decisive in doing so. Rather than scoffing at the IED as an ‘idiotic technology’ (to quote one US general),21 we might regard the Taliban and al-Qaeda as robot-enabled forces. This unfettered perspective invites us to consider other precedents for robot and counter-robot concepts, from other ‘non-conventional’ conflicts.
Influence of the Blind Spot on Current Concepts
As a result of the blind spot, Western militaries have failed to recognise the autonomy dimension to robot-enabled warfare, and have failed to recognise the unique capabilities that their personnel have over robots. Currently, the West is headed down a path of tele-warfare—warfare fought at long distance. Tele-warfare is exemplified in remote-controlled systems like the Predator Unmanned Aerial Vehicle or the PackBot Explosive Ordnance Disposal robot, placing distance between a warfighter and the threat. This exploits the distance dimension, but not the autonomy dimension.
The West’s weakness in the autonomy dimension is characterised by fighting-in-the-now, as compared with fighting-in-the-future. Continuously watching a full-motion video feed (‘Predator crack’)22 to find, fix and track a target is fighting-in-the-now. To fight-in-the-future is to think about courses of action to take if a target is found, or if a target acts in a certain way. Fighting-in-the-now is ultimately reactive to the enemy—wait for something to happen, and then scramble to respond or counter. At best, fighting-in-the-now is a recipe for being highly stressed when events transpire.23 The better alternative is to be proactive and fight-in-the-future—to think about what could happen, and then institute systems to shape or hedge.
The irony is twofold. First, humans are uniquely capable of being ‘on the loop’ to robots, with the human fighting-in-the-future to supervise the robot. Second, robots can fight-in-the-now at speeds and with effects that humans cannot match. Tele-warfare fails to recognise these two unique capabilities, and puts humans ‘in the loop’, fighting-in-the-now. Western militaries should recognise this situation, and reverse it.
Design Principles for Robot-Enabled Warfare
An al-Qaeda or Taliban insurgent can, within a matter of days, use modern, commercially-available technologies to construct and field a killer robot (IED). Said robot will be fit for its tactical, operational and strategic purposes. How does the West afford the same agility to its warfighters? In posing this question, we can postulate the West’s requirements from its robot-enabled force structure:
- Enable human supervisory control at maximum possible autonomy. Combat systems should support the warfighter to fight-in-the-future, and to have effective oversight of the robot as it fights-in-the-now. This is consistent with design for command and control in the large,24 in dynamically crafting the robot to the commander’s intent, and controlling the risks of non-combatant or friendly casualties. Commanders need not be ‘in the loop’, micromanaging the robot’s every action; rather they should be ‘on the loop’ of employing the robot to achieve the mission.
Systems design must cover both the technology and infrastructure of the human and robot, and the skills and training. A key element is likely to be in ethics, as part of ensuring that the robot has the ‘right’ program. Ethics has always been implicit in placing the weapon into a soldier’s hands, but now the weapon can be dislocated from the soldier in both space and time. Robot-enabled warfare might thus serve as a rallying point for attaining focus and cohesion in military ethics programs.25
- Enable tactical innovation of the robot’s construction and programming. We need to regain the idea of edge applications, an idea proposed as part of network-centric warfare but subsequently watered down. Edge applications are constructed from services provided by the network, and tailored by individuals to their immediate needs at the ‘edge’.26 For the killer robot as an edge application, warfighters might draw on data and algorithms for automated target recognition,27 or weapons including less-than-lethal options.28
In contrast, the current acquisition processes are preoccupied with getting equipment into the field at all, let alone the idea that warfighters might assemble systems in ways not predicted when requirements were specified. Major infrastructure takes decades to acquire and deploy, and so-called ‘rapid’ acquisition can still take months.29 This is too slow to match the 14–30 day cycle seen in contemporary conflicts, in the intellectual battle between system and counter-system.30
These principles do not require new acquisition of major systems or infrastructure. They merely reconceptualise how systems can be assembled for warfighting effect.
Conclusion
It may be difficult to accept that the West is currently on the receiving end of robot-enabled warfare. The alternative is to attempt to prevail over an adversary that is fighting from a blind spot. That is not to say that the West should copy or even mirror adversary practices. Rather, it invites us to a deeper consideration of how capabilities can be built, and the assumptions behind the concepts for employment.
About the Author
Dr Patrick Hew is a Lead Systems Scientist with the Defence Science and Technology Organisation. He works in a multi-discipline team that develops warfighting concepts for the Australian Defence Force, leveraging networks and emergent technology. His PhD is in robotics, and he was embedded in Capability Development Group during 2006. Dr Hew has been invited to the Association of Unmanned Vehicle Systems International 2010 conference as a panellist on the Ethics of Armed Unmanned Systems.
Endnotes
1 P W Singer, ‘Wired for War? Robots and Military Doctrine’, Joint Forces Quarterly, Iss. 52, 2009, pp. 104–10.
2 Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, Ashgate, Surrey, 2009, p. 4.
3 Thomas B Sheridan, Telerobotics, Automation, and Human Supervisory Control, MIT Press, Cambridge, 1992, p. 1.
4 United States Air Force Unmanned Aircraft Systems Flight Plan 2009–2047, Headquarters, United States Air Force, Washington DC, 18 May 2009. Supervisory control is the USAF preferred concept for the command and control of future unmanned systems. ‘Man on the loop’ is mentioned synonymously with supervisory control on p. 14.
5 Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, pp. 33, 43, 132–33.
6 See Peter W Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century, The Penguin Press, New York, 2009; and Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons.
7 Daniel L Davis, ‘Who decides: Man or machine?’, Armed Forces Journal, November, 2007 <http://www.armedforcesjournal.com/2007/11/3036753> accessed 25 September 2009.
8 Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, p. 33.
9 Patrick Lin, George Bekey and Keith Abney, Autonomous Military Robotics: Risk, Ethics, and Design, California State Polytechnic University, California, 20 December 2008, Appendix A.
10 Kenneth C Coons and Glenn M Harned, ‘Irregular Warfare is Warfare’, Joint Forces Quarterly, Iss. 52, 2009, pp. 97–103.
11 R Atkinson, ‘Left of Boom: The Struggle to Defeat Roadside Bombs in Iraq and Afghanistan’, The Washington Post, 2007 <http://www.washingtonpost.com/wp-srv/world/specials/leftofboom/index.ht…; accessed 29 July 2009.
12 R Atkinson, ‘The IED problem is getting out of control. We’ve got to stop the bleeding’, The Washington Post, 30 September 2007 <http://www.washingtonpost.com/wp-dyn/content/article/2007/09/29/AR20070…;
13 Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century, p. 218.
14 R Atkinson, ‘The single most effective weapon against our deployed forces’, The Washington Post, 30 September 2007 <http://www.washingtonpost.com/wp-dyn/content/article/2007/09/29/AR20070…;
15 Greg Grant, ‘The IED Marketplace in Iraq’, Defense News, 3 August 2005.
16 Greg Grant, ‘Behind the Bomb Makers’, Defense Technology International, January/February 2006, pp. 30–32, <http://www.nxtbook.com/nxtbooks/mh/dti0206/index.php?startpage=30> accessed 26 August 2009.
17 Michael Moran, ‘Frantically, the Army tries to armor Humvees’, msnbc.com, 15 April 2004, <http://www.msnbc.msn.com/id/4731185/> accessed 29 July 2009.
18 Atkinson, ‘The IED problem is getting out of control. We’ve got to stop the bleeding’.
19 Ibid. Also see Grant, ‘The IED Marketplace in Iraq’, on the use of hoax IED to prompt reactions from Coalition forces.
20 J M Beach, ‘Confronting the Uncomfortable: Western Militaries and Modern Conflict’, GeoJournal, Vol. 34, No. 2, October 1994, pp. 147, 153.
21 Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century, pp. 218–20.
22 Samuel D Bass and Rusty O Baldwin, ‘A Model for Managing Decision-Making Information in the GIG-Enabled Battlespace’, Air & Space Power Journal, Summer 2007, pp. 100–08.
23 M L Cummings, Automation Bias in Intelligent Time Critical Decision Support Systems, American Institute of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, 2004.
24 Pigeau and McCann define command as ‘the creative expression of human will necessary to accomplish the mission’ and control as ‘those structures and processes devised by command to enable it and to manage risk’. Ross Pigeau and Carol McCann, ‘Re-conceptualizing Command and Control’, Canadian Military Journal, Vol. 3, No. 1, Spring 2002, pp. 53–64.
25 Jamie Cullens, ‘What Ought One To Do? Perspectives on Military Ethics Education in the Australian Defence Force’, Chapter 7 in Paul Robinson, Nigel De Lee and Don Carrick (eds), Ethics Education in the Military, Ashgate Publishing, Surrey, 2008.
26 David S Alberts and Richard E Hayes, Power to the Edge: Command and Control in the Information Age, Command and Control Research Program, 2003: Chapter 10 for the Global Information Grid, Chapter 12 for the agility of individuals versus hierarchies.
27 Special Edition on Automatic Target Detection and Recognition, IEEE Transactions on Image Processing, Vol. 6, No. 1, January 1997; J A Ratches, C P Walters, R G Buser, et al, ‘Aided and Automatic Target Recognition Based Upon Sensory Inputs From Image Forming Systems’, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 9, September 1997, pp. 1004–19.
28 John S Canning, ‘You’ve Just Been Disarmed. Have a Nice Day!’, IEEE Technology and Society, Vol. 28, No. 1, Spring 2009, pp. 12–15.
29 Network-Centric Warfare Roadmap 2007, Department of Defence, Canberra, 2007, Sections 1-2, 7-1, 9-1. The Australian Defence Force’s ‘network dimension’ is to be deployed once on a timeline from 2007 to 2017. There is some capacity for limited (‘operationally focused’) creation of custom solutions via Rapid Prototyping Development and Evaluation, but on a timeline of 6–18 months.
30 Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century, p. 218; Vice Chairman of the Joint Chiefs of Staff General James Cartwright, quoted in Michael Peck, ‘Missile shift: Obama’s focus on new types of missile raises C2 questions’, C4ISR Journal, Vol. 8, No. 8, September 2009, pp. 22–24.