The Very Long Game: 25 Case Studies on the Global State of Defense AI
Springer, 2024, 603 pp, RRP €49.99
Hardcover ISBN 9783031586484
Editors: Heiko Borchert, Torben Schütz and Joseph Verbovszky
Reviewed by: Callum Hamilton and Adam J Hepworth
Around the world, states and defence organisations are rapidly developing their capacities to seize emergent opportunities in the military use of artificial intelligence (AI). Recent technological advances have enhanced the integration of cloud computing, data infrastructure and user interfaces. These developments are increasing the scope of human-machine interaction and improving the integration of AI within military capabilities. Today, AI-enabled systems are in use in active conflicts in the Middle East and Europe, and in increasingly prevalent grey-zone actions within the cyber domain.
The Very Long Game captures these developments in a series of 25 national case studies that provide a representative sample to help us understand and contrast emerging approaches to the military use of AI. This is the most comprehensive analysis of its type to date, representing a valuable resource for defence policymakers, industry and innovation units seeking to make sense of the evolving global military AI landscape and accelerate the adoption of military AI.
Readers in the field of military AI will benefit greatly from the introductory chapter, in which the editors synthesise the case studies and perspectives of the contributing authors. In summarising the national case studies, the editors make clear that most countries and militaries:
- aim to develop technological advantage and leadership in the global AI industry
- define AI based on its literal meaning, generally as machines performing tasks that ordinarily require human intelligence
- view AI as an enabling technology or force multiplier, and emphasise applications within existing capabilities and force structures
- aim to use AI to accelerate friendly decision-making and disrupt adversaries
- focus narrowly on machine learning techniques, leveraging recent digital modernisation initiatives for data and cloud infrastructure
- lack identified pipelines, funded programs, development environments and open architectures to rapidly translate AI applications into military capability
- face market barriers and institutional challenges to integrating AI into national industrial bases, regardless of political system or economic model
- centralise responsibility for adoption of AI with accountable officers, project offices and innovation units at the defence or service level.
The Very Long Game’s use of case studies enables readers to compare and contrast national approaches, providing a useful perspective from which to recognise points of convergence and divergence. For example, most Western-aligned countries have mirrored the US approach, vesting responsibility for defence AI coordination, implementation and strategic guidance in some variation of a ‘Chief AI Officer’ supported by a project office. This approach contrasts with the Chinese strategy towards defence innovation, which has for some time focused on the ‘intelligentisation’ of warfare. ‘Intelligentisation’ refers to the need for information dominance, which can be achieved either by disrupting an adversary’s decision cycles or by outpacing them. To this end, an emphasis on military users’ ability to exploit opportunities to apply AI at the edge may offer greater efficiencies than more reactive, centralised approaches. Other noteworthy national case studies include those of:
- Ukraine, which is engaged in ‘the first conflict where both parties compete in and with AI’, innovating and adapting with technology to seize transient advantages. Ukraine’s experience highlights that conventional ‘human-in-the-loop’ approaches to the use of AI may have become outdated. The Ukraine example indicates that human oversight is increasingly seen as a ‘formality’ that inhibits the potential of autonomous systems by increasing the costs entailed in achieving direct control in contested environments. Since the book’s publication, Ukraine has claimed that AI has been applied to some drones used during Operation Spider’s Web to strike targets at Russian airfields, even after loss of signal with human operators
- Turkey, which contends that AI enhancements to decision-making systems would accelerate the adoption of autonomous systems, technologies it views as not yet suitable for effective human-machine interaction. From the Turkish perspective, AI offers an initial layer of machine-machine control that helps humans make faster decisions and simplifies human-machine interactions
- the Netherlands, which is leveraging its leadership in international law forums to develop global AI governance frameworks based on legal principles and the maintenance of command accountability. Dutch-led initiatives such as the Responsible AI in the Military Domain Summit and the Global Commission on Responsible AI in the Military Domain are driving militaries to translate principles into global norms for practitioners, while shaping their freedom of action to design and responsibly use AI-enabled capability.
The authors track the development of different waves of AI technology to highlight how states are harnessing recent advances in cloud computing, data infrastructure, and human-machine teaming in a military context. In response to recent commercial successes in applying machine learning and natural language processing techniques to ingest datasets and generate outputs, many states are now investing in similar capabilities to provide decision advantage to their future armed forces.
The Very Long Game highlights how militaries around the world are responding to emerging developments by investing heavily in data and digital infrastructure and platforms to develop, test, train and deploy AI technologies. The case studies emphasise a trend among states of implementing data-centric strategies for AI in the military. Data-centric approaches emphasise continually improving the quality, quantity and relevance of data to enable effective AI model training and performance. While this mirrors approaches commonly used in the commercial world, the editors raise some concerns about how data-centricity integrates with forces employing AI deployed in dynamic operating environments, often characterised by imperfect information and uncertainty.
The editors’ concerns are particularly relevant in land environments, where deployed machines face significant friction in negotiating complex terrain and interacting with human-centric operations. Uncertainty and deception by an adversary are enduring characteristics of warfare that challenge the ability of humans and machines to adapt and respond effectively across a wide range of contested and unstructured operating environments. Accordingly, it should be expected that adversaries will attempt to disrupt AI-enabled systems, such as by compromising training datasets or the input data that machines rely on to interpret the environment and produce correct outputs. AI-enabled systems must therefore be designed and developed to integrate with the decision-making sequences that commanders and operators use to navigate environmental and operational complexity. A commander’s ability to use AI-enabled capability effectively and reliably within operations requires situational awareness of prior decisions and adversarial actions, and an ability to anticipate expected behaviours and outcomes. The variability inherent in operational environments also poses a challenge: defence organisations must train algorithms on datasets sourced from previous operations that are unlikely to be representative of future deployments. Synthetic data may supplement existing datasets, but it still requires developers to make assumptions about future adversaries and operating environments.
To address these challenges raised by the editors, data-centric approaches to AI model design and development must integrate with training and use at the tactical edge. Militaries face the technical challenge of providing computing power and networks to forward areas of operation, while end users face the operational challenge of controlling and adapting AI-enabled systems to their tactical needs in contested environments. There is therefore a need for decision-makers and developers to engage more closely with warfighter end users to understand how their needs may evolve depending on, for example, their area of operations, command, corps, sub-unit, platform(s), training and qualifications, or an adversary’s order of battle. Developing the capacity within data-centric approaches to continually adapt and reconfigure applications to suit different end users would allow AI-enabled capabilities to be scaled at the tactical edge. This outcome could be achieved by providing tactical-level specialists with tailored lines of support to meet the unique needs of end users applying AI across the integrated force.
The Very Long Game outlines the contemporary applications of AI in the military. As militaries adopt AI-enabled systems more systematically, there is an emerging requirement for AI policies that translate theory and principles into practice. To ensure the responsible use of AI in accordance with national policy and obligations under domestic and international law, states will need to be prepared to establish new systems of governance and assurance. The sociotechnical approaches that connect practitioners and AI models with established systems of control remain an open and evolving challenge for militaries globally.