Autonomous weapon systems, international law and meaningful human control
Abstract
The rapid advances in robotic technologies and the successful use of existing unmanned and autonomous platforms have generated significant debate on the use of autonomous weapon systems (AWS). The debates surrounding AWS have centred primarily on legal and ethical concerns, and on whether machines can ever emulate the psychology of the human decision-making process. Remarkably, this discourse occurs in the absence of a common or accepted legal definition of ‘AWS’, including what criteria or standard should be used to determine degrees or levels of autonomy. However, there is recognition and acceptance of the importance of retaining ‘meaningful human control’ in the employment of AWS, particularly in the critical functions of the selection and prosecution of targets. This article discusses whether a national policy developed by Australia should expressly articulate the requirement for meaningful human control, the development of an international regulatory regime for AWS and whether any changes to international law are required.
Introduction
The rapid advances in robotic technologies and the successful use of existing unmanned and autonomous platforms have generated significant debate on the use of autonomous weapon systems (AWS).1 The amount of discourse generated on AWS is not surprising given the increased military interest in employing these systems and the interest groups concerned about such use. Indeed, some human rights groups, academics and security experts have called for an outright ban on AWS.2 While weapon systems with significant autonomy in target selection and attack are already in use, fully autonomous systems that independently determine their actions and make complex decisions based on their environment do not exist.3 In fact, such technological capability is unlikely to be fully developed in the foreseeable future.4 The debates surrounding AWS have centred primarily on legal and ethical concerns, and on whether machines can ever emulate the psychology of the human decision-making process. Remarkably, this discourse occurs in the absence of a common or accepted legal definition of ‘AWS’, including what criteria or standard should be used to determine degrees or levels of autonomy. However, there is recognition and acceptance of the importance of retaining ‘meaningful human control’ in the employment of AWS, particularly in the critical functions of the selection and prosecution of targets.5 Yet here too there is considerable debate and no clarity on what constitutes ‘meaningful human control’.
A recent Senate inquiry into the ‘Use of unmanned air, maritime and land platforms by the Australian Defence Force’ made a number of recommendations, including that the Australian government make a policy statement on the use of armed unmanned platforms6 and that it support international efforts to establish a regulatory regime for AWS, including those associated with unmanned platforms.7 This article will discuss whether a national policy developed by Australia should expressly articulate the requirement for meaningful human control, the development of an international regulatory regime for AWS and whether any changes to international law are required. First, however, it is important to define precisely what constitutes an AWS.
Autonomous weapon systems
Currently there is no agreed definition of an AWS, although it has been defined according to the level of human supervision and/or input over target selection and attack. For example, the United States (US) Department of Defense refers to ‘autonomous weapon system’, ‘human supervised autonomous weapon system’ and ‘semi-autonomous weapon system’.8 Human Rights Watch uses the terms ‘human-in-the-loop’, ‘human-on-the-loop’ and ‘human-out-of-the-loop’, which are defined according to the level of human input and supervision.9 Other definitions have also been provided by the United Nations (UN)10 and the International Committee of the Red Cross (ICRC).11
Any mention of AWS typically conjures images of drones or unmanned aerial vehicles (UAV). However, drones and UAVs as currently used do not fall within the ICRC’s definition of an AWS, as targeting and firing are performed remotely by a human operator. An examination of the various definitions by the ICRC found that common to all is ‘the inclusion of weapon systems that can independently select and attack targets with or without human oversight’ and the ‘exclusion of weapon systems that select and attack targets only under remote control by a human operator’.12 For the purposes of its summit on ‘Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects’ in 2014, the ICRC defined AWS as ‘weapons that can independently select and attack targets, i.e. with autonomy in the “critical functions” of acquiring, tracking, selecting and attacking targets’.13 That definition is adopted for the purposes of the discussion in this article.
Meaningful human control
The notion of meaningful human control has gained increasing attention, with some advocating that it be established as a legal norm.14 The phrase was first used by Article 36, a British non-governmental organisation, which argued that lethal decision-making should require ‘meaningful human control’.15 The Convention on Certain Conventional Weapons (CCW) held its first meeting on autonomous weapons from 13 to 16 May 2014. The meeting was attended by delegations from 87 countries, the UN, the ICRC, interest groups, and independent experts and academics. During this meeting, meaningful human control emerged as a major theme. Austria, Croatia, Germany, Norway and Switzerland strongly supported a requirement for human control over individual attacks.16 The appeal of the notion of meaningful human control lies in its ability to address the legal and moral issues surrounding the use of AWS, namely:
- the accountability gap created when AWS behave in an unpredictable manner, particularly as systems grow more complex and operate in increasingly complex operational environments for extended periods17
- the delegation of moral responsibility for killing to machines18
- the inability of machines to conduct qualitative decision-making in complying with international humanitarian law19
Little appears to turn on the word ‘meaningful’ itself. An Article 36 briefing paper emphasised:
It should be noted that whilst this paper uses the term ‘meaningful human control’ there are other terms that refer to the same or similar concepts. These include ‘significant’, ‘appropriate’, ‘proper’, or ‘necessary’ ‘human judgement’ or ‘human involvement’.20
Having examined both Article 36’s policy paper and the International Committee for Robot Arms Control’s statement on meaningful human control, Horowitz and Scharre conclude that informed action is central to meaningful human control.21 While just how much information is required will depend on the circumstances of a particular use of an AWS, it must be sufficient for a person to make an informed decision on the lawfulness of an action.
An examination of the current use of less controversial weapons assists in understanding what it is about AWS that raises concerns over meaningful human control. There are three essential components of meaningful human control:
- human operators make informed, conscious decisions on the use of weapons
- human operators have sufficient information to ensure the lawfulness of their action on the basis of what they know about the target, the weapon, and the context
- the weapon is designed and tested, and human operators are properly trained to ensure effective control over the use of the weapon22
There are two different views on where and how meaningful human control fits into the existing framework for weapons review and the law of armed conflict (LOAC).23 According to the first view, meaningful human control is not an additional legal requirement: the existing rules that determine whether the use of a weapon is lawful draw no distinction between a human who conducts an attack directly and an AWS that selects and engages targets on its own. On this view, meaningful human control is merely a principle to be considered in the design and use of AWS. The alternative view is that meaningful human control is a new addition to the law, essentially a new principle of LOAC on par with proportionality, distinction and military necessity. This latter view asserts that the existing principles are insufficient to address concerns over the use of AWS. However, as the next section of this article will demonstrate, the existing law is sufficient: in order to comply with it, commanders and users of AWS will invariably inject human control into the decision-making processes from acquisition through to use.
Law of armed conflict and meaningful human control
Those calling for a ban on AWS have also sought additional treaty law. However, LOAC already provides a legal framework sufficient for the regulation of the use of AWS. Article 36 of Protocol I requires that:
… in the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.24
Weapon systems that are autonomous are not illegal per se under the three rules applied when conducting a weapons review:
- the weapon system cannot be indiscriminate in nature
- the weapon system cannot be of a nature that will cause unnecessary suffering or superfluous injury
- the harmful effects of the weapon must be capable of being controlled25
An AWS can be pre-programmed with sufficient parameters to allow it to discriminate and select targets on the same legal terms that would apply to a human soldier, particularly if operating in a non-complex environment, given the limitations of current robotics technology. The rules against unnecessary suffering and uncontrollable harmful effects can also be met by programming the AWS to attack using only certain weapons. Accordingly, AWS are not illegal per se and fit within the parameters established by the Australian Department of Defence.26
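To make the notion of ‘pre-programmed parameters’ concrete, the following is a minimal sketch, in Python, of how human-set constraints might gate an engagement decision. Every name, class and threshold here is a hypothetical illustration rather than an actual targeting implementation; the point is that each constraint is fixed by humans at the planning stage and the system fails closed, withholding attack unless all conditions are met.

```python
from dataclasses import dataclass

# Hypothetical illustration only: names, classes and thresholds are invented
# for this sketch and do not describe any actual weapon system.

@dataclass(frozen=True)
class Track:
    """A sensed object as classified by the weapon system's sensors."""
    target_class: str   # e.g. 'armoured_vehicle' or 'civilian_vehicle'
    confidence: float   # classifier confidence, 0.0 to 1.0
    lat: float
    lon: float

@dataclass(frozen=True)
class EngagementRules:
    """Constraints fixed by humans at the planning stage, before deployment."""
    permitted_classes: frozenset    # only these target classes may be engaged
    min_confidence: float           # below this, the system must hold fire
    area: tuple                     # (lat_min, lat_max, lon_min, lon_max) geofence
    permitted_munitions: frozenset  # limits the weapon's harmful effects

def may_engage(track: Track, munition: str, rules: EngagementRules) -> bool:
    """Fail closed: attack only if every pre-programmed constraint is satisfied."""
    lat_min, lat_max, lon_min, lon_max = rules.area
    return (
        track.target_class in rules.permitted_classes
        and track.confidence >= rules.min_confidence
        and lat_min <= track.lat <= lat_max
        and lon_min <= track.lon <= lon_max
        and munition in rules.permitted_munitions
    )

rules = EngagementRules(
    permitted_classes=frozenset({"armoured_vehicle"}),
    min_confidence=0.95,
    area=(-29.0, -28.0, 135.0, 136.0),  # a block of uninhabited desert
    permitted_munitions=frozenset({"anti_armour_guided"}),
)
print(may_engage(Track("armoured_vehicle", 0.97, -28.5, 135.5), "anti_armour_guided", rules))  # True
print(may_engage(Track("civilian_vehicle", 0.99, -28.5, 135.5), "anti_armour_guided", rules))  # False
```

On this design, the legal judgements about distinction and permissible weapons effects are made by the humans who set the rules, not by the machine at the moment of attack.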
Even if a weapon is deemed legal under Article 36, its use must still comply with the LOAC rules relating to targeting. Targeting law governs the use of lawful weapons and includes three principles: distinction, proportionality and the requirement to take precautions in attack.
The principle of distinction requires that a distinction is made between combatants and non-combatants and between military and civilian objects.27 The ability of an AWS to make these distinctions will vary depending on the operational environment and context and the technological capability of that weapon system including the complexity of the computer algorithms and data sets. Clearly, the ability of an AWS to comply with the principle of distinction will depend very much on technological advancements. That said, Thurnher points out that there may be ‘situations in which an autonomous weapon system could satisfy this rule with a considerably low level ability to distinguish between civilian and military targets.’28 It would require far more complex technology for a machine to make such distinctions in an urban environment.
Proportionality requires that anticipated civilian harm is not excessive when weighed against the reasonably anticipated concrete and direct military advantage.29 As with the principle of distinction, there are operational circumstances in which civilian presence is unlikely, such as a battle waged in open desert with no civilian inhabitants, or undersea anti-submarine operations. It would be difficult for a machine to apply the proportionality test in an urban environment. That said, the ability of humans to apply this ambiguous test is also questionable.30 There is no precise formula and its assessment relies heavily on the judgement of the human soldier.
The principle of precautions in attacks requires feasible precautions to be taken in an attack to reduce harm to civilians.31 What is feasible is determined by the commander and is usually addressed in the planning for an overall attack rather than a decision made at the tactical level.32 Assessing the precautions at the stage of planning and programming a machine would be sufficient to comply with the principle if the ‘planning assumption’ remains valid for the duration of the AWS’s deployment.33
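The ‘planning assumption’ point can also be illustrated in code. The sketch below, again entirely hypothetical, shows a simple guard under which engagement authority lapses automatically once the conditions assumed at the planning stage, reduced here to a commander-set validity window and a civilian-free environment, can no longer be presumed to hold.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical illustration only: a guard under which the machine's authority
# to engage lapses together with the commander's planning assumptions.

class PlanningAssumptions:
    """Conditions the commander assumed when authorising the deployment."""

    def __init__(self, authorised_at: datetime, valid_for: timedelta):
        self.authorised_at = authorised_at  # when a human authorised deployment
        self.valid_for = valid_for          # how long that assessment is trusted

    def still_valid(self, now: datetime, civilians_detected: bool) -> bool:
        # Fail closed: once the window expires, or the environment departs from
        # the civilian-free assumption made at planning, the system must hold
        # fire and refer back to a human for fresh authorisation.
        within_window = now - self.authorised_at <= self.valid_for
        return within_window and not civilians_detected

plan = PlanningAssumptions(datetime.now(timezone.utc), timedelta(hours=6))
now = datetime.now(timezone.utc)
print(plan.still_valid(now, civilians_detected=False))  # True: assumptions hold
print(plan.still_valid(now, civilians_detected=True))   # False: assumption broken
```

Framed this way, the meaningful human control resides in the human decisions that set and bound the window rather than in anything the machine decides for itself.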
The AWS currently in use are employed in less complex environments. However, advances in technology will see a push to use these systems in more varied environments where civilians are more likely to be present. Generally, before any weapon system (autonomous or not) is employed, commanders will continue to exercise judgement concerning all the factors relevant to assessing the three targeting principles. These may include the likelihood of civilian presence, the expected military advantage, the anticipated harm to civilians, and the capabilities and limitations of the weapon system. Therefore, whether legal requirements are met in a particular attack will go beyond an assessment of a machine’s programming and technical abilities to include the human judgement exercised in deciding to use the machine for the particular attack in the first place. For these reasons, Anderson, Reisner and Waxman conclude that:
… there is no reason, in principle, why a highly automated or autonomous system could not satisfy the requirements of targeting law. Like any otherwise lawful weapon, it depends on the use and the environment.34
At which stage of the process (planning, programming or execution) human control must be injected for it to be considered ‘meaningful human control’ will depend on the particular AWS and the operational context. Accordingly, it would be difficult to define meaningful human control for all permutations of battlefield scenarios. Meaningful human control is not, and need not be, a separate and additional principle alongside the three fundamental principles of LOAC; in practical terms, it is already embedded in the current review and use of AWS. Its application will occur naturally as commanders ensure that their plan, and their execution of that plan, including any use of AWS, satisfies the requirements of LOAC.

However, the inclusion of guidance for the use of AWS in organisational or national policy would be extremely helpful for commanders at all levels, with the nature of that guidance determined by the capabilities and limitations of the particular AWS. Notably, the Senate inquiry was ‘not convinced that the use of AWS should be solely governed by the law of armed conflict, international humanitarian law and existing arms control agreements’35 and was of the view that the ‘development of an additional protocol to the CCW is likely to be the most appropriate multilateral avenue to regulate the use of AWS, including those on unmanned platforms.’36 Will the Australian government or the Australian Defence Force (ADF) adopt the notion of meaningful human control in any policy or international position it develops on AWS? It should certainly do so as a means of ensuring that commanders comply with LOAC. An international position formalised in a treaty would not only encourage other states to comply with their LOAC obligations but would also provide a level playing field.
The Senate inquiry
The Senate inquiry proved timely given the increasing military use of unmanned platforms, the US employment of UAVs, the proliferation of UAV capability and the ADF’s own use of unmanned platforms. Indeed, the 2013 Defence White Paper asserted that the ‘importance of unmanned air, maritime and land platforms to future ADF operations and the future force needs further investigation.’37 The Australian government is clearly interested in the growth of Defence capabilities and has committed to return the Defence budget to 2% of Gross Domestic Product within the next decade.38 In a 2014 paper, the Lowy Institute identified that ‘defence systems need to be either automated, or autonomous’ in order to respond to the increased tempo of conflict.39 The Defence Science and Technology Organisation (DSTO) includes AWS in its Cyber Science and Technology Plan as part of its vision for the future. Autonomous systems have been identified as one of five foundational research themes, and indicative research activities include ‘artificial intelligence, machine learning, automated reasoning and planning under uncertainty; human machine partnerships’.40 Within the Australian Army, Project LAND 40041 and LAND 302542 may see the development of unmanned ground vehicles which include some form of AWS to promote survivability. AWS are clearly a potential capability in which the government is willing to invest.
Part of the Senate inquiry report is dedicated to a discussion of AWS and unmanned platforms and includes a reference to the US Department of Defense policy statement on AWS, including manned and unmanned platforms, and guided munitions.43 Numerous submissions were made, including from Defence, the ICRC and academics. Defence submitted that:
It is theoretically possible that an unmanned system with sufficient processing power and a library of threat signatures could be armed and programmed to apply lethal force autonomously. The ADF will embrace semi-autonomous systems where that capacity can save lives or reduce exposure … but where lethal force is involved a trained operator will remain responsible for the application of that force [emphasis added].44
The Senate concluded that:
- ‘… until there is sufficient evidence that AWS are capable of rigid adherence to the law of armed conflict their development and deployment should be appropriately regulated.’45
- ‘The committee is not convinced that the use of AWS should be solely governed by the law of armed conflict, international humanitarian law and existing arms control agreements.’46
- ‘The development of an additional protocol to the CCW is likely to be the most appropriate multilateral avenue to regulate the use of AWS, including those on unmanned platforms.’47
- ‘Australia should form and advocate a considered position which supports the eventual establishment of international regulation on the use of lethal force by AWS.’48
- ‘[having noted the US Department of Defense policy directive on AWS] the committee considers the ADF should review its own policy directives to assess whether a similar policy directive on AWS, or amendments to existing policies, are required.’49
The inquiry committee made two recommendations in relation to AWS:
Recommendation 7
8.33 The committee recommends that the Australian Government support international efforts to establish a regulatory regime for autonomous weapons systems, including those associated with unmanned platforms.
Recommendation 8
8.34 The committee recommends that following the release of the Defence White Paper 2015 the Australian Defence Force review the adequacy of its existing policies in relation to autonomous weapon systems.50
US and UK policy
The US and United Kingdom (UK) are the only states to have developed national policies on AWS, both of which are publicly available.51 These policies include reference to an element of human control.
The US policy states that ‘[a]utonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.’52 At the CCW informal meeting of experts on AWS in April 2015, the US delegation described the framework of the US policy:
The framework establishes a deliberative approval process by senior officials, sets out the technical criteria that would need to be satisfied in order to develop autonomous weapon systems, and then assigns responsibility within our Defense Department for overseeing the development of autonomous weapons systems. The Directive imposes additional requirements beyond what is normally required during our weapons acquisition process. These additional requirements are designed to minimise the probability and consequences of failure in autonomous and semi-autonomous weapons systems that could lead to unintended engagements and ensure appropriate levels of human judgment over the use of force.53
The structure of the US policy injects some form of human judgement at different points throughout the process from weapons acquisition to the use of force. The US appears to adopt the view that these additional measures in its policy on AWS will enable it to comply with LOAC.
The UK considers the existing international law sufficient to regulate the use of AWS. While the US policy permits the autonomous release of weapons, the UK policy states that ‘the autonomous release of weapons’ will not be permitted and that ‘… operation of weapon systems will always be under human control’.54 The UK’s more conservative policy sees it ‘committed to using remotely piloted rather than highly automated systems as an absolute guarantee of oversight and authority for weapons release.’55
Given the common international law and military interests of the UK, US and Australia, it is likely that any policy or international position adopted by the Australian government will also include an explicit reference to some sort of human control or oversight. However, whether it will permit the autonomous release of weapons may depend on how Australia approaches the moral issue of whether a machine ought to be making decisions to kill a human being.56
Conclusion
Notwithstanding the argument that the existing legal framework is sufficient to regulate the use of AWS, the Australian government should actively participate in the discourse relating to AWS. The ADF and Defence industry also need to be engaged in order to shape any potential international regulatory regime that would serve to promote this nation’s future interests while ensuring compliance with international law.
Noting that many aspects of the discussion on the use of AWS remain ambiguous and unresolved, and that the technology will continue to advance, Australia should be careful not to unintentionally bind itself to limitations on the use of AWS that are overly restrictive and stifle the advancement of technology. On the other hand, Australia may need to balance this consideration against moving too far in the opposite direction, which could see malfunctioning robot armies equipped with the potential to autonomously decide to destroy cities.
Endnotes
- H. Gulam and S. Lee, ‘Uninhabited combat aerial vehicles and the law of armed conflict’ Australian Army Journal, Vol. III, No. 2 (winter 2006), pp. 125–26.
- K. Harris, ‘‘Killer robots’ pose risks and advantages for military use’, CBC News, http://www.cbc.ca/news/politics/killer-robots-pose-risks-and-advantages…-use-1.3026963 (posted 9 April 2015), accessed 8 September 2015.
- ‘Autonomous weapon systems: Technical, military, legal and humanitarian aspects’, Expert meeting, Geneva, Switzerland, 26–28 March 2014.
- Ibid., L. Righetti, ‘Civilian robotics and developments in autonomous systems’, Speaker’s summary, pp. 25–27; K. Anderson, D. Reisner and M. Waxman, ‘Adapting the Law of Armed Conflict to Autonomous Weapon Systems’, International Law Studies, US Naval War College, Vol. 90 (2014), pp. 386–411.
- Supra note 3, p. 7.
- Senate Inquiry, ‘Use of unmanned air, maritime and land platforms by the Australian Defence Force’, Foreign Affairs, Defence and Trade References Committee, (June 2015), Recommendation 2.
- Ibid., Recommendation 7.
- US Department of Defense, Autonomy in Weapon Systems, Directive 3000.09 (21 November 2012), http://www.dtic.mil/whs/directives/pdf/300009p.pdf; N. Sharkey, ‘Towards a principle for the human supervisory control of robot weapons’, Politica & Societa, No. 2 (May–August 2014).
- B. Docherty, Losing Humanity: The Case Against Killer Robots, Human Rights Watch (November 2012), p. 2, http://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf.
- C. Heyns, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN General Assembly, A/HRC/23/47 (9 April 2013), paragraph 38.
- ICRC, International Humanitarian Law and the challenges of contemporary armed conflicts, Official working document of the 31st International Conference of the Red Cross and Red Crescent (28 November – 1 December 2011), p. 39, http://www.icrc.org/eng/assets/files/red-cross-crescent-movement/31st-international-conference/31-int-conference-ihl-challenges-report-11-5-1-2-en.pdf.
- Supra note 3, pp. 63–64.
- Supra note 3.
- M. Horowitz and P. Scharre, ‘Meaningful Human Control in Weapon Systems: A Primer’, Center for a New American Security Working Paper, (March 2015).
- Article 36, ‘Structuring Debate on Autonomous Weapon Systems’, memorandum for delegates to the Convention on Certain Conventional Weapons (CCW), November 2013, at: http://www.article36.org/wp-content/uploads/2013/11/Autonomous-weapons-memo-for-CCW.pdf.
- ICRC, International Humanitarian Law and the challenges of contemporary armed conflicts, Official working document of the 31st International Conference of the Red Cross and Red Crescent (28 November – 1 December 2011), p. 39, http://www.icrc.org/eng/assets/files/red-cross-crescent-movement/31st-international-conference/31-int-conference-ihl-challenges-report-11-5-1-2-en.pdf.
- Supra note 3, pp. 87–90.
- Supra note 3, pp. 91–94.
- Supra note 3, pp. 77–86.
- Article 36, ‘Memorandum for delegates at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS)’.
- Supra note 14; M. Horowitz and P. Scharre, ‘Defining ‘Meaningful Human Control’ Over Autonomous Weapons’, The Pipeline, 19 March 2015, at: https://www.justsecurity.org/21244/defining-meaningful-human-control-autonmous-weapon-systems/, accessed 20 September 2015.
- Ibid.
- Supra note 14.
- Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977, Article 36.
- For example, Judge Advocate General, US Air Force, AFI51-402, Legal Reviews of Weapons and Cyber Capabilities 3.1.1, 3.1.2 (2011).
- Defence, Submission to the Senate Foreign Affairs, Defence and Trade References Committee Inquiry into the Use of Unmanned Platforms by the ADF, Submission 23.
- AP I, supra note 24, Article 48.
- As cited by Anderson, Reisner and Waxman, ‘Adapting the Law of Armed Conflict to Autonomous Weapon Systems’, pp. 386–411.
- AP I, supra note 24, articles 51(5)(b) and 57(2)(a)(iii).
- Supra note 29.
- AP I, supra note 24, article 58.
- M. Sassoli, ‘Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to be Clarified’, manuscript to be published in US Naval War College, International Law Studies, Vol. 90, 2014.
- W. Boothby, ‘How Far Will the Law Allow Unmanned Targeting to Go?’ in D. Saxon (ed.), International Humanitarian Law and the Changing Technology of War, The Netherlands: Martinus Nijhoff Publishers, (2013), p. 58.
- Supra note 28, p. 406.
- Senate Inquiry, ‘Use of unmanned air, maritime and land platforms by the Australian Defence Force’, Foreign Affairs, Defence and Trade References Committee, (June 2015), paragraph 8.30.
- Ibid.
- Department of Defence, Defence White Paper 2013, p. 19.
- J. Magnay, ‘Kevin Andrews vows greater investment to strengthen defence’, The Australian (28 April 2015).
- R. Medcalf and J. Brown, ‘Defence challenges 2035: Securing Australia’s lifelines’, Lowy Institute for International Policy (November 2014).
- ‘Cyber 2020 Vision: DSTO cyber science and technology plan’, Defence Science and Technology Organisation, Department of Defence, (6 May 2014).
- ‘Project Land 400’, Australian Army, http://www.army.gov.au/Our-future/Projects/Project-LAND-400, accessed 21 September 2015; ‘Land 400 - The impossible dream? (Part II)’, Asia-Pacific Defence Reporter, Australian Defence in a Global Context, http://www.asiapacificdefencereporter.com/articles/97/LAND-400-the-impossible-dream-Part-II (22 December 2010), accessed 21 September 2015.
- Defence, Submission to the Senate Foreign Affairs, Defence and Trade References Committee Inquiry into the Use of Unmanned Platforms by the ADF, Submission 23.
- Ibid., pp. 43–48.
- Ibid.
- Ibid., paragraph 8.29.
- Ibid., paragraph 8.30.
- Ibid.
- Ibid., paragraph 8.31.
- Ibid., paragraph 8.32.
- Supra note 6.
- US Department of Defense, Autonomy in Weapon Systems, Directive 3000.09.
- Ibid.
- Michael Meier, US Delegation Opening Statement, Convention on Certain Conventional Weapons (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems, 13 April 2015, p. 2.
- UK Ministry of Defence, Written Evidence from the Ministry of Defence submitted to the House of Commons Defence Committee inquiry ‘Remote Control: Remotely Piloted Air Systems – current and future UK use’, September 2013, p. 3.
- Supra note 3, p. 18.
- K. Anderson and M. Waxman, ‘Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can’, Hoover Institution, Stanford University, (2013); ‘Autonomous weapon systems: Technical, military, legal and humanitarian aspects’, Expert meeting, Geneva, Switzerland, 26–28 March 2014.