Understanding how to scale and accelerate the adoption of RAS

Phase 1—Identifying Barriers

An Australian Army M113AS4 armoured logistics vehicle, fitted with optionally crewed combat vehicle technology and a remote weapon station, fires from a support-by-fire position during a human-machine team exercise at Puckapunyal Military Area.

Introduction

The emergence of increasingly capable robotic and autonomous systems (RAS) has been recognised as a potentially disruptive wave, raising significant challenges as well as opportunities for militaries. Every major military force has now publicly declared an interest in developing, utilising or banning RAS. Even world leaders, such as Russian President Vladimir Putin, have issued proclamations regarding their potential impact on the future of warfare. Yet, despite these claims, there have been no documented wide-scale deployments of weapon systems that one could unequivocally declare to be both robotic and functionally autonomous.

Here definitions are important. For the purposes of this analysis, the definition of RAS focuses on systems that are both robotic and functionally autonomous, to the exclusion of remote-operated platforms and non-embodied artificial intelligence (AI) systems. ‘Autonomy’ refers to the capacity of a system to ‘execute a task, or tasks, without human input, using interactions of computer programming with the environment’. Thus, an ‘autonomous system’ can be understood as a system that, ‘whether hardware or software, once activated performs some task or function on its own’. However, ‘autonomy’ is a relative, rather than binary, characteristic derived from displayed functionality, so it is difficult to cleanly differentiate whether a system is truly ‘fully’ autonomous. As a result, it is common in the literature and in military strategic documents to refer to categories of autonomy along a spectrum.

For example, the Australian Army RAS Strategy Autonomy Spectrum incorporates four levels of autonomy: remotely operated (often conflated with autonomous systems), automatic (where a human remains in the loop to monitor and potentially intervene), autonomic (where the human supervises or tasks a system, thus remaining in the decision loop) and autonomous (where the human starts the decision loop but the system can then act independently). Having considered these characteristics, as well as the core question this paper attempts to answer, the term ‘robotic and autonomous systems’ is used here to refer to systems that are both robotic and functionally autonomous: that is, systems that would be classed as ‘autonomous’ in the Australian Defence Force (ADF) RAS Autonomy Spectrum.
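To make this working definition concrete, the spectrum can be sketched as a simple classification in which each level is distinguished by the human’s role in the decision loop. The Python sketch below is purely illustrative: the enum names, descriptions and helper function are hypothetical renderings of the distinction drawn above, not a schema defined in the Army strategy itself.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Hypothetical encoding of the Australian Army RAS Strategy Autonomy
    Spectrum; the strategy describes these levels in prose, not as a schema."""
    REMOTELY_OPERATED = "human performs the task through the platform"
    AUTOMATIC = "human in the loop: monitors and can intervene"
    AUTONOMIC = "human in the decision loop: supervises or tasks the system"
    AUTONOMOUS = "human starts the decision loop; system then acts independently"

def is_ras_for_this_paper(level: AutonomyLevel) -> bool:
    """Apply the paper's working definition: only systems at the
    'autonomous' end of the spectrum count as RAS here."""
    return level is AutonomyLevel.AUTONOMOUS

# Remote-operated platforms fall outside the working definition;
# fully autonomous systems fall inside it.
assert not is_ras_for_this_paper(AutonomyLevel.REMOTELY_OPERATED)
assert is_ras_for_this_paper(AutonomyLevel.AUTONOMOUS)
```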

That is not to say that relevant technologies are not proliferating; powerful AI-enabled tools, remote-operated systems and even task-based autonomy have clearly all been deployed in some fashion by militaries, law enforcement agencies and armed non-state groups. The United States (US) Air Force, for example, has acquired and deployed remote-operated platforms at significant scale and sophistication over the past 20 years; they represented 8.5 per cent of its total airframes in 2021. Such actors have employed systems capable of operating in an autonomous mode supervised by a human in the loop (such as the Super Aegis II), as well as defensive systems that autonomously engage threats but remain, at least in theory (given the speed of engagement), subject to human override (for example, Patriot, Aegis and CIWS). More controversial are loitering munitions (such as Harpy, Harop and Shahed-136), which are capable of independently selecting and engaging targets by matching signatures against a pre-established database. Such systems (which would also include certain cruise missiles, such as the Brimstone) have been described in the media as lethal autonomous weapons systems, while debate in the legal and scholarly communities is ongoing.

And yet, something is still blocking states as powerful and wealthy as the US and China from accelerating fully autonomous robotic systems from concept and prototype to a deployable, scalable capability. Even in the Russo-Ukrainian war, the largest land campaign in Europe since World War II, deployment of such systems has been limited to remote-operated platforms, including loitering munitions such as the Switchblade and Shahed-136. But the utilisation of such platforms is not new; at best, they are extensions of operational concepts employed over two decades ago. In 2001, for example, the US used remote-controlled aircraft for direct strikes, and the use of such systems as a ‘poor man’s air force’ was demonstrated by ISIS as early as 2015. In the context of the current conflict in Ukraine, then, the question arises as to why the Russian military has elected to deploy long-retired tank models from storage to continue the fight rather than use its much-vaunted Uran series of uncrewed armed ground vehicles. Why hasn’t the uncrewed version of the T-14 Armata tank moved beyond the prototype stage? More broadly, why hasn’t the scale of the conflict in Ukraine reached the ‘demonstration point’ for the use of fully autonomous weapons systems?

To answer this question, it is necessary to go beyond the claims that too little has been invested, that the technology is not yet mature, or that militaries have not recognised the potential value of autonomous systems. On the first, consider that the US invested more in AI-related research in 2018 than Indonesia’s entire defence budget that year. While exact figures are unavailable, China is estimated to be spending comparable amounts on related research. Claims that the technological barriers are insurmountable also seem weak against the continued proliferation of commercial AI tools. Nor can a serious argument be made that these states are insufficiently focused on autonomous systems: China has famously designated them as central to the rise of ‘intelligentized warfare’, and they enjoy pride of place in the US Third Offset Strategy.

Thus, it is not a lack of scale, scope or resource capacity that is responsible for the failure to adopt an innovation; rather, it is the underestimation of the complexities (the barriers) of the system into which the innovation is to be adopted. The situation is made more problematic by the fact that experimentation ‘on the bleeding edge’ requires accepting the risk that many projects will not reach maturity or will not be accepted into service, sometimes for good technical or operational reasons. Achieving this complex balance has been extensively studied in both the civilian and military literature, and it remains a challenge for the ADF.

The Department of Defence’s announcement of the Advanced Strategic Capabilities Accelerator (ASCA) and the release of the Defence Strategic Review (DSR) indicate that senior civilian and military decision-makers recognise this challenge. Its centrality is made clear by the fact that trusted autonomy is one of ASCA’s initial six priority areas. Similarly, the dedication of a DSR chapter to the need for the ADF to generate asymmetric advantage through technology shows that this challenge sits at the core of Defence thinking and planning. The pre-eminence of AI and autonomous systems in ‘AUKUS Pillar Two’ demonstrates that both will continue to be capabilities of significance to the future ADF.

Attachment: occasional_paper_20.pdf (2.66 MB)