
Can the Robots Save the City?

A Wargame Experiment

Knitted Hat Lying among Debris in Ukrainian City.

Arguably the biggest challenge for law-and-ethics-abiding military forces engaged in urban warfare is their current reliance on standoff firepower to reduce their own casualties while allowing progress against adversaries who are concealed and protected. As is the case in operations conducted on rural terrain, this tactic is essential to overcome defensive urban advantages. However, it is less palatable or even counterproductive in cities because of the associated civilian suffering. The employment of precision weapons merely reduces the destruction.

This challenge only looks likely to intensify and raises a powerful question: can military operations on urban terrain still succeed if civil society is successful in its efforts to secure a prohibition on the use of explosive weapons in cities or urban areas? (See 'Explosive Weapons: Declaration to Curb Civilian Harm', Human Rights Watch.) One possibility is that some or all of the standoff engagement capability provided by explosive weapons (such as artillery and aerial bombing) might be replaced by autonomous weapons systems. The potential capability of such systems to operate within and around urban structures – rather than destroying them – while keeping human operators at a safe distance, seems to show significant promise as a means to fight in the city without destroying the city (urbicide).

Autonomous weapons are, however, controversial. Opponents are concerned about a) the ability of these systems to operate within the parameters of international humanitarian law; b) the issue of accountability for strikes carried out by autonomous weapons; c) the risk of escalation and the lowering of the threshold for armed conflict; and d) the idea that allowing an autonomous weapon to kill a human being is a fundamental violation of human dignity.

There is, clearly, a central ethical tension present in the controversy about autonomous weapons. On the one hand, lethal autonomous weapons have the potential to reduce the destructiveness and high proportion of non-combatant deaths that are currently features of urban warfare. On the other hand, there are ethical objections to the use of lethal autonomous weapons in general. If (as we believe) the ethics of war is an ongoing negotiation between recognising the necessity of war in appropriate circumstances and minimising its destructiveness, then we cannot appropriately weigh this equation without having a reasonable sense of just what operational impact lethal autonomous weapons are likely to have in urban warfare.

To address this challenge, the University of New South Wales Canberra's Future Operations Research Group, led by Associate Professor Deane-Peter Baker and Professor David Kilcullen, conducted an experiment to consider the hitherto unexplored operational impact of ethical constraints on lethal autonomous weapon systems (LAWS). The team employed a matrix wargaming methodology using a series of scenarios set in 2035 in which hypothetical ADF capabilities and rules of engagement varied when fighting a very capable 'enemy' force with LAWS. The context of the experiment is the normative arguments for banning or tightly regulating lethal autonomous weapons. The wargame was followed up with a round of live experimentation.

Key Findings

Among the key findings of the experiment were:

  • Graduated Autonomy. The findings suggest that future forces will prefer semi-autonomous ('man in the loop') systems wherever circumstances allow; however, there will also be circumstances, particularly in high intensity combat operations, where fully autonomous systems will be extremely valuable.
  • Autonomy Impact on Target Value. It was noted throughout the vignettes that the higher value targets (from a 'Red' perspective) were those that had an identified human element. This emphasises that while autonomous systems are ideally placed to undertake tasks seen as 'dangerous and/or dirty', the corollary is that, as they become more prevalent, the target value of manned systems and nodes will increase.
  • Impact of ROE and SPINS. 'Blue' noted clearly that the vignettes featuring more constrained rules of engagement and special instructions – designed to reflect possible future constraints on the use of force in densely populated urban areas – did affect their planning, most notably in their employment of indirect fires. The availability of Robotic and Autonomous Systems clearly provided alternative means of achieving effects that might otherwise have been achieved by indirect fires.
  • Truth and Trust. A key factor in the employment of Artificial Intelligence and Autonomous Systems was the ability, or otherwise, to trust that system to execute a task critical to the operational plan.

Conclusions and Recommendation

This brief summary cannot do justice to the full findings of the experiment – readers are encouraged to view the full report. While no single experiment of this kind can be considered conclusive, our preliminary results are telling. Specifically, they strongly suggest that, while military forces with a strong tradition of ethical and legal control over military action are likely to prefer remotely piloted and semi-autonomous systems in lower intensity conflict (i.e. conditions that allow the luxury of 'human-on-the-loop' decision-making), fully autonomous weapon systems nonetheless offer significant operational advantages in high intensity warfighting. This operational advantage, employed correctly, also has ethical weight – it can result in reduced risk to soldiers and, importantly, to non-combatants. When it comes to the employment of lethal autonomous systems on the battlefield, the key to balancing the ethical scales will be ensuring that trust in these systems is matched to their trustworthiness, which will vary according to the operational context and the state of the technology. To achieve this balance in practice, we recommend the development of a concept of operations for the operational employment of lethal autonomous systems.

(This article draws on Jenna Allen and Deane-Peter Baker 2023 ‘Can the Robots Save the City? Ethics, Urban Warfare, and Autonomous Weapons’ in Dragan Stanar and Kristina Tonn (eds.) The Ethics of Urban Warfare: City and War (Leiden: Brill))

The views expressed in this article and subsequent comments are those of the author(s) and do not necessarily reflect the official policy or position of the Australian Army, the Department of Defence or the Australian Government.
