Militaries worldwide are embracing machines with varying degrees of automation and autonomy.[1] However, ensuring that machines pioneered in industry and academia meet the needs and constraints of military users remains challenging. One core challenge is that machines are frequently conceived and developed as standalone systems, whilst their military use increasingly entails operation in human-machine teams. To be effective in human-machine teaming (HMT), machines must be capable of recognising and accounting for the differing intent, goals, capabilities, limitations, and situational awareness of teammates, adversaries, and commanders. Game theory, of the “serious” or mathematical kind, appears fundamental to generating such a capability.[2]
Teeming Teams
HMT is one of the leading means of employing machines for military use because it seeks to leverage the strengths of both humans and machines.[3] The widespread use of first-person view (FPV) drones in the recent conflict in Ukraine has illustrated the disproportionate potential of even rudimentary HMT. Indeed, FPV drones capitalise on the human strength of visual navigation, and the machine strengths of speed, manoeuvrability, and target homing.[4] HMT is now part of the plans to adopt robotic and autonomous systems for military use in the United Kingdom, the United States, and Canada.[5] All three branches of the Australian Defence Force (ADF)—Army, Navy, and Air Force—have also embraced HMT.[6] For Army, HMT is expected to enhance range and lethality, enabling small combat teams to generate asymmetric advantage and to create secondary benefits including force protection, increased mass and scale, and more rapid decision-making.[7]
Militaries that have significant resources, and experience in the development of automation and artificial intelligence (AI) technologies, are expected to be most successful in pursuing HMT.[8] These militaries are specifically anticipated to gain asymmetric advantage, both in deploying their own machines, and in countering those deployed by adversaries.[9] Conversely, militaries that fail to implement effective HMT will be at a significant disadvantage in future conflicts, leaving them prone to incurring substantially more (human) casualties, and being potentially unable to contend effectively on the modern battlefield.[10]
Perceptive and Collaborative Machines
HMT involves integrating humans and machines as interdependent teammates that coordinate and collaborate to achieve common goals.[11] It differs from classic approaches to integrating humans and machines in that it shares with machines tasks that were previously performed only by humans.[12] For example, whilst machines such as artillery have a long history of use as “tools” in Army, until now they have not had the potential to independently collect, process, and act on information.
For Army, HMT entails transforming machines to be capable of coordinating, collaborating, and acting to achieve command intent under conditions of uncertainty and in contested environments.[13] It also entails transforming machines to be capable of exercising control subject to tasking by command (which remains a fundamentally human function).[14] Control, being concerned with “coordinating forces towards outcomes determined by Command”, requires objective, empirical, and timely situational understanding.[15] For machines to exercise control, they must therefore be capable of gaining situational understanding, including inferring and reasoning about uncertain and potentially conflicting states, goals, intentions, capabilities, limitations, and experiences of teammates and adversaries.[16]
New AI and automation technologies are needed to develop the situational-awareness and coordination capabilities required by machines to support effective HMT within Army. Game theory is emerging as crucial to creating and adopting these technologies due to its focus on the design and analysis of optimal interactions between goal-driven agents in uncertain, dynamic, and contested environments.[17]
Avoiding the Robinson Crusoe Fallacy (but not Clausewitz)
Most existing automation and AI technologies that are available for equipping machines with teaming capabilities draw on limited, if any, game theory.[18] They are instead built on the (implicit) assumption that machines act in relative isolation from humans. As such, effects that are not a direct consequence of a machine’s own actions are assumed to be artefacts of nature.[19] Those technologies that do consider multiple humans or machines typically employ limited game theory by assuming (explicitly or implicitly) a high degree of structure in interactions, available information, and the environment.
Perhaps surprisingly, game theory has had only a minor role in the development of celebrated AI systems like OpenAI Five, AlphaStar, and MuZero that support boardgames and computer games such as Chess, Go, Dota 2, and Starcraft II.[20] Game theory has played a more significant role in the development of subsequent AI systems such as Pluribus for Poker, CICERO for Diplomacy, and those developed under various DARPA and industry challenges.[21] However, the grounding of existing AI systems in boardgames and computer games inherently ties them to simplistic environments that evolve periodically with fixed rules and a known number of players. Furthermore, in these environments communication between teammates is easily achieved and there is little variation in the range of possible actions available to players. Consequently, existing systems and technologies are limited in their ability to recognise and account for the differing intent, goals, capabilities, limitations, and situational awareness of teammates, adversaries, and commanders in environments that are uncertain, dynamic, and contested.
Developing HMT with existing automation and AI systems and technologies therefore risks committing the so-called ‘Robinson Crusoe fallacy.’ This is the error of attributing the deliberate plans, actions, and effects of teammates and/or adversaries to the randomness of nature.[22] Committing this fallacy will, at best, result in inefficient or suboptimal HMT implementations. At worst, it will lead to HMT implementations that can be effectively exploited or countered by adversaries. For example, if all soldiers undertaking a foot patrol in a human-machine team suddenly stop or turn, the machines should recognise this “random” event as a deliberate action potentially explicable as the humans perceiving an adversary with a sensing modality that the machines lack (e.g., acoustic sensors). If the machines commit the Robinson Crusoe fallacy by treating this event as purely random and continue their original tasking, at best the mistake will be corrected quickly. At worst, the team is placed at a disadvantage, and the adversary has found an exploitable weakness.
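The foot-patrol example can be made concrete with a toy Bayesian calculation. The sketch below is purely illustrative (the function name, priors, and likelihoods are assumptions, not doctrine): a machine that observes every teammate halt at once should update its belief that the event was deliberate, rather than implicitly fixing that belief at zero as a “Robinson Crusoe” system does.

```python
# Toy illustration of avoiding the Robinson Crusoe fallacy: treat an observed
# event as possibly deliberate, not purely random. All numbers are assumptions.

def posterior_deliberate(prior_deliberate, p_event_if_deliberate, p_event_if_random):
    """Bayes' rule: probability that the observed event was a deliberate action."""
    p_event = (p_event_if_deliberate * prior_deliberate
               + p_event_if_random * (1.0 - prior_deliberate))
    return p_event_if_deliberate * prior_deliberate / p_event

# Event: every soldier in the patrol stops at once. A simultaneous halt is
# very likely if deliberate, and very unlikely to occur by chance.
belief = posterior_deliberate(prior_deliberate=0.1,
                              p_event_if_deliberate=0.9,
                              p_event_if_random=0.01)
print(f"P(deliberate | simultaneous halt) = {belief:.2f}")  # prints 0.91

# A "Robinson Crusoe" machine effectively leaves this belief at its prior (0.1)
# and continues its original tasking; a game-aware machine updates it and re-plans.
```

Even with a low prior, a single highly informative observation drives the posterior above 0.9, which is the quantitative content of treating teammates as deliberate agents rather than as noise.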
To invoke Clausewitz, “…war is not an exercise of the will directed at inanimate matter...the will is directed at an animate object that reacts”.[23] HMT introduces new animate objects, both friend and foe; it must therefore explicitly recognise and account for them.
Game (Theory) Time
Frameworks for HMT built around game theory have recently begun to appear in the open literature.[24] Within these frameworks, all team members (human and machine) are treated as interacting but independent agents with goals, intentions, capabilities, limitations, experiences, and physical and cognitive states. This approach overcomes the Robinson Crusoe fallacy by emphasising the development of new game-theoretic situational-awareness and collaboration technologies for machines (and training for humans).
Game-theoretic situational-awareness technologies are intended to enable machines to orientate themselves, both physically and cognitively, with reference to their team.[25] To do so, they compute and maintain a variety of hypotheses about the state of the environment and the physical and cognitive states of other humans and machines including their positions, goals, intentions, capabilities, limitations, and experiences.[26] To achieve this, they may draw data from sensors and communication systems, as well as from advanced systems capable of inferring the physical and cognitive states and intentions of agents by observing their actions.[27] Proposed architectures for these technologies also emphasise the importance of integrating computational and physics-based models of machines with cognitive models of human behaviour.[28]
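The inference step can be sketched with a deliberately simple stand-in for the inverse-game methods cited above. All names, trajectories, and goal locations below are hypothetical: each candidate goal is scored by how consistently the observed movements close the distance to it, and the scores are normalised into a belief.

```python
import math

# Minimal sketch of inferring an agent's goal from observed actions.
# Real systems use inverse optimal control / inverse game theory; this
# toy version only illustrates the idea. All values are assumptions.

def infer_goal(positions, candidate_goals, rationality=2.0):
    """Return a belief over candidate goals given an observed trajectory."""
    scores = []
    for goal in candidate_goals:
        progress = 0.0
        for p0, p1 in zip(positions, positions[1:]):
            # Positive contribution when the step moved closer to this goal.
            progress += math.dist(p0, goal) - math.dist(p1, goal)
        scores.append(math.exp(rationality * progress))
    total = sum(scores)
    return [s / total for s in scores]

# Observed teammate trajectory heading roughly east.
trajectory = [(0, 0), (1, 0), (2, 0), (3, 1)]
goals = [(6, 1), (0, 6)]  # hypothesis A: eastern objective; B: northern objective
belief = infer_goal(trajectory, goals)
print(belief)  # belief concentrates on the eastern objective
```

The `rationality` parameter plays the role of a softmax temperature: higher values assume the observed agent acts more purposefully, sharpening the inferred belief.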
Game-theoretic collaboration technologies are intended to allow machines to select actions that achieve goals in a manner that aids and aligns with teammates.[29] For example, consider a situation in which soldiers teamed with an aerial drone are tasked with navigating an outdoor environment. The drone has a better overview of the environment but cannot see under tree cover, whilst the soldiers on the ground have limited but detailed fields of view. Game-theoretic HMT would involve the drone following, as well as predicting, the paths and fields of view of the soldiers to proactively share imagery to help guide them. Game-theoretic collaboration thus involves machines computing not just their own actions, but understanding those that are best for others.[30] Being grounded in game theory, there is a rich collection of analytical and computational tools from which to develop these technologies so that they are (provably) optimal and/or robust.[31]
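The drone scenario can be reduced to a small common-payoff game to show what “computing actions that are best for others” means in practice. The payoff numbers and action names below are illustrative assumptions only: the drone chooses the action maximising the expected payoff of the team, given a predicted distribution over the soldiers' route.

```python
# Sketch of team-optimal action selection in a small common-payoff game.
# Payoffs and action names are illustrative assumptions, not doctrine.

# Joint payoff to the *team* for each (drone action, soldier route) pair.
team_payoff = {
    ("scout_ahead", "open_field"): 8,   # drone overwatch helps most in the open
    ("scout_ahead", "tree_line"):  3,   # canopy blocks the drone's view
    ("relay_comms", "open_field"): 4,
    ("relay_comms", "tree_line"):  6,   # a comms relay helps under tree cover
}

def best_drone_action(soldier_belief):
    """Pick the drone action maximising expected TEAM payoff, given a belief
    over the soldiers' route (a joint objective, not a selfish one)."""
    drone_actions = {d for d, _ in team_payoff}
    def expected(d):
        return sum(p * team_payoff[(d, s)] for s, p in soldier_belief.items())
    return max(drone_actions, key=expected)

# Predicted soldier behaviour: likely to move under the tree line.
print(best_drone_action({"open_field": 0.2, "tree_line": 0.8}))  # relay_comms
```

Note that the drone's choice changes with its prediction of the soldiers: the same payoffs with the prediction reversed would favour scouting ahead, which is why collaboration and situational awareness are inseparable in these frameworks.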
Importantly for Army, the theory underlying recent game-theoretic approaches to HMT extends beyond cooperative interactions within teams, to competitive interactions with adversaries, and to interactions between leaders and subordinates.[32] This development opens significant opportunities to forge new game-theoretic frameworks and technologies for HMT that are compatible with existing and future command and control structures and doctrine. It further raises the prospect of equipping machines with technologies that mimic conventional doctrinal approaches. This may include the capacity to consider an adversary’s most likely and most dangerous courses of action, and to recognise when an adversary may be acting to bluff or deceive.
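Leader-subordinate and defender-attacker interactions of this kind are commonly modelled as leader-follower (Stackelberg) games of the type treated by Basar and Olsder. A minimal sketch, with wholly invented actions and payoffs, shows the essential structure: the leader commits first, anticipating the follower's best response.

```python
# Sketch of a leader-follower (Stackelberg) game, the standard game-theoretic
# model for commitment structures such as defender-attacker interactions.
# The tiny payoff table is an illustrative assumption, not doctrine.

# payoffs[(leader_action, follower_action)] = (leader_payoff, follower_payoff)
payoffs = {
    ("patrol_north", "attack_north"): (2, 1),
    ("patrol_north", "attack_south"): (1, 3),
    ("patrol_south", "attack_north"): (3, 2),
    ("patrol_south", "attack_south"): (2, 1),
}

def stackelberg(payoffs):
    """Leader commits first; the follower best-responds to the observed
    commitment; the leader anticipates this and commits to the action
    maximising its own resulting payoff."""
    leaders = {l for l, _ in payoffs}
    followers = {f for _, f in payoffs}
    def follower_best(l):
        return max(followers, key=lambda f: payoffs[(l, f)][1])
    return max(leaders, key=lambda l: payoffs[(l, follower_best(l))][0])

print(stackelberg(payoffs))  # patrol_south
```

The same enumeration, read with the leader as a commander and the follower as a machine subordinate, is one way such frameworks can encode command intent while leaving the follower's choice to the machine.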
Finally, given the range of analytical, computational, and simulation tools already developed in game theory across engineering, economics, and AI, game-theoretic HMT frameworks offer an immediate opportunity to investigate, analyse, and wargame important “What If?” questions of HMT and counter-HMT. For example, different approaches to employing or countering HMT can be explored in simulation to help commanders understand how to effectively employ HMT, and how to train soldiers for interactions with machine teammates and adversaries.
Conclusion
Human-machine teaming is a core element of plans to modernise the Army and the ADF. Effective HMT requires machines (and humans) to be capable of explicitly recognising and accounting for the intent, goals, capabilities, limitations, and differing situational awareness of teammates, adversaries, and commanders. HMT grounded in game theory has begun to appear in the open literature as a means of achieving this capability. Army should seek to investigate and employ game-theoretic HMT for combat environments that are uncertain, dynamic, and contested, involving teammates, commanders, subordinates, and adversaries. Since game theory represents a niche capability, the Army and the ADF will need to work with academia and industry to identify sovereign capability in game theory and related fields to develop effective HMT for the battlefield.
About the Author
Dr Timothy Molloy is a senior lecturer in the School of Engineering at the Australian National University. He is currently an Australian Army Research Centre (AARC) Fellow investigating game theory for human-machine teaming. He was previously a postdoctoral research fellow at the University of Melbourne on the joint Australia-US multidisciplinary university research initiative project Neuro-Autonomy: Neuroscience-Inspired Perception, Navigation, and Spatial Awareness for Autonomous Robots, and prior to that he was an Advance Queensland research fellow at the Queensland University of Technology.
[1] P.W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (Penguin, 2009); Mick Ryan, War Transformed: The Future of Twenty-First-Century Great Power Competition and Conflict (Naval Institute Press, 2022); Ash Rossiter and Peter Layton, Warfare in the Robotics Age (Lynne Rienner, 2024); Jean-Marc Rickli, Federico Mantellassi, and Quentin Ladetto, What, Why and When? A Review of the Key Issues in the Development and Deployment of Military Human-Machine Teams (Geneva Centre for Security Policy, 2024).
[2] Krishnamoorthy Kalyanam et al., “Optimal Human–Machine Teaming for a Sequential Inspection Operation,” IEEE Transactions on Human-Machine Systems 46, no. 4 (2016): 557–68; Mengyao Li and John D Lee, “Modeling Goal Alignment in Human-AI Teaming: A Dynamic Game Theory Approach,” 66, no. 1 (2022): 1538–42; Werner Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part III: Semantic Foundations,” ACM Transactions on Cyber-Physical Systems (New York, NY, USA) 8, no. 1 (2024).
[3] Mick Ryan, Man-Machine Teaming for Future Ground Forces (Center for Strategic and Budgetary Assessments, 2018); Tate Nurkin and Julia Siegel, Battlefield Applications for Human-Machine Teaming: Demonstrating Value, Experimenting with New Capabilities and Accelerating Adoption (Atlantic Council, Scowcroft Center for Strategy and Security, 2023); Rickli, Mantellassi, and Ladetto, What, Why and When?; Sidharth Kaushal et al., Leveraging Human–Machine Teaming (London, UK: Royal United Services Institute for Defence and Security Studies, 2024).
[4] Kateryna Stepanenko, The Battlefield AI Revolution Is Not Here Yet: The Status of Current Russian and Ukrainian AI Drone Efforts (Institute for the Study of War, 2025), 1–3, https://understandingwar.org/backgrounder/battlefield-ai-revolution-not….
[5] “British Army Approach to Robotics and Autonomous Systems,” UK Ministry of Defence, 2022, https://www.army.mod.uk/media/15790/20220126_army-approach-to-ras_final…; Chris Gordon, “USAF Leaders See ‘Human-Machine Teams’—Not Robots—as Future of Airpower,” Air & Space Forces Magazine [Online], December 15, 2024, https://www.airandspaceforces.com/usaf-leaders-see-human-machine-teams-…; Madison Cameron et al., Human-Machine Teaming Research Roadmap for Future Autonomous System Integration, DND-1144.1.20-01 (Defence Research and Development Canada (DRDC), 2024), https://cradpdf.drdc-rddc.gc.ca/PDFS/unc481/p818476_A1b.pdf.
[6] Royal Australian Navy, RAS-AI Strategy 2040: Warfare Innovation Navy (Sea Power Centre Australia, 2020); Australian Army, Robotic & Autonomous Systems Strategy v2.0 (Commonwealth of Australia, 2022), https://researchcentre.army.gov.au/sites/default/files/Robotic%20and%20…; Royal Australian Air Force, HACSTRAT - A Strategic Approach for Air and Space Capability (Commonwealth of Australia, 2021), https://www.airforce.gov.au/sites/default/files/2022-09/hacstrat_full_v….
[7] Australian Army, Robotics and Autonomous Systems, 13.
[8] Rossiter and Layton, Warfare in the Robotics Age, 16–20; Kaushal et al., Leveraging Human–Machine Teaming, 7.
[9] Kaushal et al., Leveraging Human–Machine Teaming, 7–8; Rickli, Mantellassi, and Ladetto, What, Why and When?, 6; Nurkin and Siegel, Battlefield Applications, 1.
[10] Kaushal et al., Leveraging Human–Machine Teaming, 7.
[11] Matthew Johnson and Alonso Vera, “No AI Is an Island: The Case for Teaming Intelligence,” AI Magazine 40, no. 1 (2019): 16–28; Joseph B Lyons et al., “Human–Autonomy Teaming: Definitions, Debates, and Directions,” Frontiers in Psychology 12, no. 12 (2021): 5–8.
[12] Australian Defence Force, ADF Concept for Command and Control of the Future Force (Commonwealth of Australia, 2019), 18; Alex Neads, David J. Galbreath, and Theo Farrell, From Tools to Teammates: Human-Machine Teaming and the Future of Command and Control in the Australian Army (Australian Army Research Centre, 2021), 2–3.
[13] ADF, ADF Concept for Command and Control, 17–18.
[14] ADF, ADF Concept for Command and Control, 17–18; Neads, Galbreath, and Farrell, From Tools to Teammates, 19–35.
[15] ADF, ADF Concept for Command and Control, 18.
[16] ADF, ADF Concept for Command and Control, 23–27.
[17] Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part III,” 2–4.
[18] Robert W. Andrews et al., “The Role of Shared Mental Models in Human-AI Teams: A Theoretical Review,” Theoretical Issues in Ergonomics Science 24, no. 2 (2023): 155–157.
[19] Andrews et al., “The Role of Shared Mental Models,” 155–157; Klaus Bengler et al., “A Reference Architecture of Human Cyber-Physical Systems – Part II: Fundamental Design Principles for Human-CPS Interaction,” ACM Transactions on Cyber-Physical Systems 8, no. 1 (2024): 22–26.
[20] Oriol Vinyals et al., “Grandmaster Level in StarCraft II Using Multi-Agent Reinforcement Learning,” Nature 575, no. 7782 (November 2019): 350–54; Christopher Berner et al., “Dota 2 with Large Scale Deep Reinforcement Learning,” arXiv 1912.06680 (2019); Julian Schrittwieser et al., “Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model,” Nature 588, no. 7839 (December 2020): 604–9.
[21] Noam Brown and Tuomas Sandholm, “Superhuman AI for Multiplayer Poker,” Science 365, no. 6456 (August 2019): 885–90; Meta Fundamental AI Research Diplomacy Team (FAIR)† et al., “Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning,” Science 378, no. 6624 (December 2022): 1067–74; Justin Drake et al., “Human-Powered AI Gym: Lessons Learned as the Test and Evaluation Team for the DARPA SHADE Program: Human-Powered AI Gym,” in Practice and Experience in Advanced Research Computing 2024: Human Powered Computing (Association for Computing Machinery, 2024), 1–5; Nolan Bard et al., “The Hanabi Challenge: A New Frontier for AI Research,” Artificial Intelligence 280 (March 2020): 5.
[22] So called because Robinson Crusoe mistakenly believed he was living alone in nature on an island. See: George Tsebelis, “The Abuse of Probability in Political Analysis: The Robinson Crusoe Fallacy,” American Political Science Review 83, no. 1 (1989): 77–91.
[23] Carl von Clausewitz, On War, ed. and tran. M. Howard and P. Paret (Princeton University Press, 1976), 149.
[24] Werner Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part I: Fundamental Concepts,” ACM Transactions on Cyber-Physical Systems 8, no. 1 (2024): 1–32; Bengler et al., “A Reference Architecture of Human Cyber-Physical Systems – Part II,” 1–27; Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part III,” 1–23; Thom Hawkins and Daniel Cassenti, “A Utilitarian Approach to the Structure of Human-AI Teams” (paper, 28th International Command & Control Research and Technology Symposium, Laurel, MD, USA, December 2023).
[25] Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part III,” 2–3.
[26] Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part I,” 2.
[27] Bengler et al., “A Reference Architecture of Human Cyber-Physical Systems – Part II,” 22; Timothy L Molloy et al., Inverse Optimal Control and Inverse Noncooperative Dynamic Game Theory: A Minimum-Principle Approach (Springer Nature, 2022), 143–225.
[28] Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part I,” 3; Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part III,” 2–3; Li and Lee, “Modeling Goal Alignment,” 1538–1542.
[29] Damm et al., “A Reference Architecture of Human Cyber-Physical Systems – Part III,” 14.
[30] Ibid.
[31] See, for example, R. Isaacs, Differential Games: Mathematical Theory with Application to Warfare and Pursuit Control and Optimisation (Dover Publications, 1965); Kalyanam et al., “Optimal Human–Machine Teaming for a Sequential Inspection Operation”; Tamer Basar and Geert Jan Olsder, Dynamic Noncooperative Game Theory, 2nd ed. (Academic Press, 1999).
[32] Basar and Olsder, Dynamic Noncooperative Game Theory, 11.