People, Culture and Ethics (Spotlight Brief 7/21)
The content in this article is an extract of Spotlight Brief 7/21.
Accountability and Control of Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight
Minds and Machines – Aug 2020
The emergence of Autonomous Weapon Systems has led to increased academic and societal interest in the concepts of accountability and responsibility. Considerable focus has been on the ‘accountability gap’ that could result in a lack of criminal liability and moral responsibility. The authors of this paper attempt to remedy this issue. They begin by proposing an understanding of ‘accountability’ that allows operationalisation as a verifiable requirement for practical use. They then identify three potential accountability gaps that Autonomous Weapon Systems may engender. The authors present the principle of ‘Meaningful Human Control’ to ensure human accountability for the decision to use force, but propose broadening this principle into a framework for ‘Comprehensive Human Oversight.’ Drawing from engineering, socio-technical, and governance perspectives on ‘control,’ the authors’ ‘Comprehensive Human Oversight’ framework provides a mechanism to ensure robust controllability and accountability for the behaviour of Autonomous Weapon Systems. While at first glance this article may appear highly technical and semantic, in practical terms it outlines an instrument and process for ensuring oversight before, during, and after the deployment of Autonomous Weapon Systems.
‘A French Opinion on the Ethics of Autonomous Weapons’, War on the Rocks, 02 Jun 21
‘NATO Tees Up Negotiations on Artificial Intelligence in Weapons’, C4ISRNet, 28 Apr 21
‘Artificial Intelligence and War Without Humans’, Asia Times, 23 Apr 21
‘Regulating Military AI Will Be Difficult. Here’s a Way Forward’, The Bulletin, 03 Mar 21
‘US has ‘Moral Imperative’ to Pursue AI Weapons, Panel Says’, Engineering and Technology, 27 Jan 21
Challenges in Regulating Lethal Autonomous Weapons under International Law
Southwestern Journal of International Law – Mar 2021
Many states have called for binding international legal restrictions on the development, procurement, and use of lethal autonomous weapons. This paper outlines the challenges in creating such a regulatory scheme. A key limitation for this proposal is the lack of an agreed definition of ‘autonomous weapons’. Another is the substantial gulf between individual nations’ perspectives concerning the content of the laws. At least twenty-six States have called for a total ban on fully autonomous lethal weapons. Other States have asserted that the existing international laws of armed conflict are sufficient to govern the development and use of autonomous weapons. The United States, an obvious major player, argues that autonomous weapons could enhance conformity with the existing laws of war by increasing targeting precision and discrimination. This article considers historical trends concerning the traits of weapons that have been banned, finding that attempts to regulate the use of autonomous weapons will likely prove unsuccessful.
‘Changing the Conversation: The ICRC’s New Stance on Autonomous Weapon Systems’, Lawfire, 24 May 21
‘International Discussions Concerning Lethal Autonomous Weapon Systems’, Congressional Research Service, 19 Apr 21
‘We Need to Restart Talks on Regulating Autonomous Weapons – Now’, The Ploughshares Monitor, 25 Mar 21
‘US Govt Panel: Don’t Ban AI-Powered Autonomous Weapons’, The Defense Post, 28 Jan 21
‘Guidelines for Military and Non-Military Use of Artificial Intelligence’, European Parliament, 20 Jan 21
In Search of the ‘Human Element’: International Debates on Regulating Autonomous Weapons Systems
The International Spectator – Feb 2021
The development of Autonomous Weapon Systems could create ethical and legal issues for the Army. Concerns about these weapons include accountability gaps, criminal liability, and the potential to violate human dignity by reducing targets to algorithmically processed ‘data points.’ In this work, Daniele Amoroso and Guglielmo Tamburrini propose a solution. They note that the international community has sought to avert these problems by preserving a human element when force is applied. These resolutions fall under the umbrella of ‘Meaningful Human Control’ and span three categories – boxed autonomy, denied autonomy, and supervised autonomy. Amoroso and Tamburrini highlight defects in these policies, and instead advance their own novel, differentiated approach to Meaningful Human Control. This differentiated approach stipulates that every individual type of Autonomous Weapon System requires its own individualised formula governing its ethical and legal deployment. This remedy seems appropriate for the Army given the wide variety of capabilities that could conceivably be labelled ‘Autonomous Weapon Systems.’
‘Why We Should be Alert and Alarmed about Autonomous Weapons Systems’, The Canberra Times, 11 Jun 21
‘The Fog of War May Confound Weapons that Think for Themselves’, The Economist, 29 May 21
‘Artificial Intelligence, Weapons Systems and Human Control’, E-International Relations, 16 Feb 21
‘Drone Swarms Are Getting Too Fast for Humans to Fight, U.S. General Warns’, Forbes, 27 Jan 21
Coupling Level of Abstraction in Understanding Meaningful Human Control of Autonomous Weapons: A Two-Tiered Approach
Ethics and Information Technology – Apr 2021
Steven Umbrello argues that Autonomous Weapon Systems will not create an ethical accountability gap. He believes that those who claim that autonomy will generate this issue misunderstand the nature of military operations and engineering design. In the former case, Autonomous Weapon Systems, and indeed any human agent in the military, are already constrained by the various decisions and planning that occur before and during deployment. In the latter case, engineers already have a moral duty to ensure that Autonomous Weapon Systems are fully responsive to their orders. The practical implication of these ‘two levels of abstraction’ is that a human will always be accountable for any action taken. Suppose an Autonomous Weapon System kills a civilian. Either the killing was deliberately ordered as part of the military operation, in which case the decision-maker is accountable, or it was contrary to orders, in which case the engineers who designed the weapon are accountable. Umbrello concludes that arguments “against the development of Autonomous Weapon Systems, such as increased autonomy or the targeting of civilians, are only problematic if decoupled from responsible design, actual military planning, and actual operations practices.” The challenge for Army in adopting this model of thinking lies in the second part. We obviously have significant control over military operations, but over the past three to four decades, we have actively steered away from being involved in design or manufacturing decisions.
‘The Pentagon Inches Toward Letting AI Control Weapons’, Wired, 10 May 21
‘Accountable Autonomy: When Machines Kill’, Observer Research Foundation, 07 Apr 21
‘Rise of the Robots: Weaponization of Artificial Intelligence’, National Maritime Foundation, 01 Apr 21
‘Are We Ready For Weapons to Have a Mind of Their Own?’, The Centre for Public Integrity, 17 Feb 21
‘Command Responsibility: A Model for Defining Meaningful Human Control’, Journal of National Security Law & Policy, 02 Feb 21
On the Indignity of Killer Robots
Ethics and Information Technology – Mar 2021
All military personnel have an interest in maintaining their dignity in combat. Some theorists have claimed that death at the hands of Autonomous Weapon Systems would be undignified. Garry Young offers two rebuttals to this ‘indignity argument.’ His weaker rebuttal accepts that deploying Autonomous Weapon Systems to kill an enemy combatant involves an affront to their dignity and that violating someone’s dignity is morally wrong. Despite this, Young contends that a moral good can override such a wrong. As such, it can be permissible to deploy these weapons if the reduced suffering in achieving a military objective outweighs the affront to dignity. Young’s more robust response denies that Autonomous Weapon Systems violate the dignity of the combatants targeted. Whilst the weapons themselves may be incapable of respecting the inherent dignity of the combatants they target, the military commanders who deploy them can. Further, Young suggests that human beings may have an unconditional, inalienable dignity, and that combatants can maintain their dignity even in the face of indignity.
‘AI Ethics Have Consequences – Learning from the Problems of Autonomous Weapons Systems’, Diginomica, 05 Jul 21
‘The Problem with ‘Moral Machines’, ABC, 28 Mar 21
‘Autonomous Weapons and the Laws of War’, Valdai, 09 Feb 21
‘War Machines: Can AI For War Be Ethical’, The Cove, 29 Jan 21
‘Opposing Inherent Immorality in Autonomous Weapons Systems’, The Forge, 01 Jan 21
The views expressed in this article and subsequent comments are those of the author(s) and do not necessarily reflect the official policy or position of the Australian Army, the Department of Defence or the Australian Government.