
Mission Command and Artificial Intelligence: Obstacles to Integration

1 July 2021

The integration of artificial intelligence (AI) and Mission Command is the most important challenge facing those tasked with building the ADF’s future Command and Control system.[1] For the Army in particular, getting this integration right is essential. The Australian Army has identified AI as a critical component of Australia’s future land power, but AI is a tool with unique characteristics. Rather than being merely a physical instrument, it is also an “intelligent” one, presenting a new set of opportunities and challenges that span both the physical and cognitive domains. Most significantly, given AI’s potential impact on the overall Command and Control process, the integration of AI within the Army needs to go beyond physical workflows. For AI to work for the Command and Control process rather than against it, conceptual coherency between AI and Mission Command must be established.

To build this coherency, one must first recognise the obstacles to integrating AI with the Mission Command philosophy’s core principles, namely understanding and trust. While significant attention is paid to competency and reliability in the AI development and integration process, much less is paid to understanding and trust.[2][3] The successful integration of AI and Mission Command can only occur if understanding and trust are also brought to the fore, and these obstacles must be overcome through a united effort from both AI developers and Army field personnel.

The first obstacle is understanding, which comes in three parts. First, the AI must be able to interpret the superior commander’s intent. Without this capability, the AI risks a total breakdown of the Mission Command system. Second, the AI must be able to articulate its own intent to its subordinates and colleagues, both human and machine. This clarity of communication is essential to Mission Command’s model of decentralised execution, which enables faster and more relevant decision-making. Third, the AI must be able to accomplish both tasks within an appropriate timeframe. A temporal dimension must therefore be added to the prerequisites of understanding, covering both interpretation and articulation. It is pointless for the AI to be in the loop if it is slower than the decision-making tempo. Whether acting as a decision aid or as a more autonomous agent within a loop, an AI that cannot interpret and articulate the mission parameters, required results, or tasks undertaken in a timely fashion can directly cause mission failure.

These criteria are challenging to meet. While significant progress has been made in the subfield of Natural Language Processing, considerable challenges remain in making machines more cognitively capable. A fundamental challenge is to overcome the intricacies of human communication and linguistics, which commonly result in misinterpretations even among humans.[4] These misinterpretations can be amplified when AI systems lack understanding of the underlying connotations that underpin human interaction. Ultimately, it would be unwise to rely on machines that would not pass the Turing Test. Named after its creator Alan Turing, the Turing Test is a method for determining whether an AI can demonstrate human-like intelligent behaviour. Despite the test’s various deficiencies, it remains a relevant benchmark for AI’s cognitive and communicative ability.

The second obstacle is trust. Given its emotive foundation and the role of effective leadership in its cultivation, trust likely represents the most significant challenge AI must overcome to establish conceptual and organisational coherency under the Mission Command framework. For AI to become a digital enabler rather than a disrupter of the Army’s Mission Command philosophy, it must meet two criteria. First, the AI must be able to establish trust with its human teammates. Second, the level of trust must be appropriate to each teammate’s reliability and competency, including the AI’s own.[5] Given the inherent link between trust and understanding, the AI needs to adequately understand each team member’s competency, reliability, weaknesses, and specific attributes, and vice versa, for appropriate levels of mutual trust to develop. The word ‘appropriate’ deserves particular emphasis: if the level of trust is too high or too low, Mission Command’s philosophy will break down in practice, with potentially disastrous consequences.

The obstacle of trust also raises the important distinction between Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI). This distinction reflects a vast difference in both capacity and the type of trust that exists between, and within, human-machine teams. Today’s AI devices all fall into the ANI category, which includes the semi-autonomous weapon systems used by the Army. No matter the emotional attachment that develops between operator and machine, human trust towards an ANI is relatively straightforward, as it reflects the operator’s trust in the machine’s competency and reliability.[6] This trust is more problematic in reverse. Given individual variations, it may be difficult for ANIs to adequately assess each subordinate member’s competency, reliability, weaknesses, and attributes. While humans face similar difficulties in assigning appropriate levels of trust to their teammates, there is no evidence that machines have exceeded the human capacity for leadership and empathy. While AGIs will likely display greater competency in this field and establish a more personal mutual trust with their teammates, they remain under development and are unlikely to become available in the near to medium term. In addition, their potential “personhood” may raise significant ethical dilemmas for society. Therefore, AI’s greatest cognitive challenge remains the development of an appropriate level of understanding and trust towards its human teammates, particularly given the limitations of ANI.

These issues of understanding and trust represent important obstacles to AI’s successful integration into the Command and Control process. With Mission Command as the underlying philosophy, not only is AI not a silver bullet for the Army’s existing challenges, it is a new factor that complicates them. As the Australian Army pursues AI as an enabler for Command and Control, it is crucial to be aware that AI must not only be competent and reliable, but also capable of understanding and trust. A concerted effort from both AI developers and Army field personnel is needed to overcome this problem. The solution to this cognitive challenge is unlikely to be purely technological, but one that blends technical expertise with conceptual understanding.


[1] Australian Defence Force, ADF Concept for Command and Control of the Future Force, DSN O1644248 (Canberra, ACT, 2019), 12-16. https://theforge.defence.gov.au/sites/default/files/adf_concept_for_com…

[2] Chris Field, “Connecting Good Soldiering and Mission Command,” The Cove, March 18, 2021. https://cove.army.gov.au/article/connecting-good-soldiering-and-mission…

[3] Chris Field, “Gaining and Maintaining Mutual Trust and Shared Understanding: 7 Daily Actions in our Workplaces,” The Cove, August 10, 2020. https://cove.army.gov.au/article/gaining-and-maintaining-mutual-trust-a…

[4] Sebastian Ruder, “Tracking the Progress in Natural Language Processing,” Sebastian Ruder, June 22, 2018. https://ruder.io/tracking-progress-nlp/index.html

[5] Andy J. Fawkes and Martin Menzel, “The Future Role of Artificial Intelligence: Military Opportunities and Challenges,” JAPCC Journal, no. 27 (2018): 70-77.

[6] Marlon W. Brown, “Developing Readiness to Trust Artificial Intelligence within Warfighting Teams,” Military Review 100, no. 1 (2020): 36-44.

The views expressed in this article and subsequent comments are those of the author(s) and do not necessarily reflect the official policy or position of the Australian Army, the Department of Defence or the Australian Government.
