
To Automate or Not Automate? That is the Question

‘In a few years robots will move so fast you will need a strobe light to see them’

Robots and Autonomous Systems (RAS) seem set to transform the way we live. While military drones have been a key feature of modern conflict for some time, they appear poised to deliver far more advanced capability in the context of developments in artificial intelligence (AI) and machine learning (ML), both of which are attracting increasing investment. We are now witnessing early attempts to merge RAS technologies with AI and ML software to enhance combat effectiveness, so the reality of independent or intelligent RAS (iRAS) warfighting teams operating alongside humans could be just over the technology horizon. Recent examples include Russian efforts to develop autonomous ground combat vehicles and US Army trials of manned-unmanned teaming, during which a single tank crew controlled several drones during a tactical exercise[i]. It is therefore expected that Yesterday’s Outdated Requirements, Ideas and Concepts (YORIC) will be challenged by this seismic technology shift[ii].

Army decision-makers and modernisation planners will have known YORIC well, with familiar examples such as towed artillery, crewed armoured vehicle turrets, piloted helicopters, engineer obstacle breaching and infantry support weapon platoons. While it remains to be seen how rapidly iRAS matures, the indicators and warnings suggest that combat machines capable of reasoning and independent manoeuvre might be optimised to perform those familiar functions. Moreover, iRAS could deliver greater efficiency at lower cost than human-operated combat systems[iii]. It follows that maintaining a business-as-usual approach could eventually invite the slings and arrows of outrageous fortune. Hence the steps already taken to observe this technology phenomenon are timely, as preparing for a resilient autonomous future may soon become urgent, particularly in the context of parallel cyber developments such as supercomputer- and ML-enabled hacking. These growing cyber security risks were illuminated in earlier posts, so they won’t be explained further other than to propose that trust guarantees will be paramount if Army seeks to operate intelligent combat systems.

If capability choices are made in the future to acquire iRAS, it will be prudent to avoid a piecemeal approach[iv]. Stove-piped acquisition of single capabilities might create gaps that adversaries could exploit or generate interoperability complications. Therefore, iRAS options could be assessed heuristically to identify automation opportunities across the Land Combat System. However, while the promise of smart autonomous systems is compelling, just because a military function can be automated does not mean it should be[v]. There will also be stakeholder influences advocating the use of smart technologies for the application of force, so caution is warranted to avoid missteps. Moreover, in the context of tight budgets and personnel caps, the temptation to generate mass with machines may make for convincing business cases, so the Defence Cooperative Research Centre (DCRC) in Trusted Autonomous Systems may have value in informing a ‘tooth-to-tail’ assessment of which functions could safely be automated.

It is now plausible that the implications of iRAS for how Army’s future force is structured and fights as part of the joint force will be significant and could herald Beersheba-like strategic change. Consequently, if mature iRAS combat and logistic capabilities are introduced, major cultural shifts will also be required given historical corps predispositions. To gain an insight into what a smart-machine future might look like, the US Army RAS Strategy provides useful hints, including the aspiration for autonomous systems to be fully integrated across the entire force from 2031. So how far should the Australian Army progress with iRAS or counter-iRAS plans? The DCRC may help to illuminate this dilemma, but the slippery slope to full automation will be vexing to negotiate, with tough capability decisions ahead. Therefore, investigation of iRAS risks and opportunities might help inform the ‘next step’ beyond Plan Beersheba.


[i] This trial is a glimpse of what is to come; however, this manned-unmanned teaming (MUM-T) configuration may actually create greater risk to the humans involved, as a single human command node with no redundancy becomes the MUM-T centre of gravity and will be targeted by an adversary seeking to disrupt the entire team.

[ii] The reference to ‘Yesterday’s Outdated Requirements, Ideas and Concepts’ is made in the context of a future in which iRAS capabilities might be proven reliable; these requirements, ideas and concepts are not necessarily ‘outdated’ in the current force.

[iii] Tesla’s autonomous vehicle Autopilot reacts ten times faster than a human driver.

[iv] It might be that progressing iRAS acquisition presents unacceptable risks to Government if autonomous trust guarantees cannot be assured. An alternative outcome may see capability development focus on counter-iRAS capabilities, including the acquisition of RAS without independent/reasoning AI and ML functionality.

[v] There may be compelling reasons to leave a particular military function directly in human hands.

The views expressed in this article and subsequent comments are those of the author(s) and do not necessarily reflect the official policy or position of the Australian Army, the Department of Defence or the Australian Government.
