
Contained, Enplaned and Restrained: Strategic Brinkmanship in the Australian Context


Abstract

Strategic brinkmanship, the preparedness to take a country to the edge of war without ultimately having to do so, has a powerful historical basis in the United States and China and is on the rise between those nations in the Indo-Pacific. Although their competition is multifaceted, the most significant security risk for Australia appears likely to play out in the race for technologically sophisticated autonomous systems and artificial intelligence (A/AI), where risk-taking could confer a decisive advantage. The implications for Australia of a regional military A/AI race are potentially immense, but the nexus between strategy and technology is frequently absent from contemporary Australian debate. This article will characterise the Australian strategic commentariat’s response to Indo-Pacific A/AI competition as contained, enplaned and restrained. In other words, the debate is overly contained by geographic factors; too focused on traditional, crewed platforms as the proposed military hardware solutions; and not sufficiently focused on A/AI as both a serious threat and a necessary capability.


Introduction

The United States (US) and China are no strangers to the strategic brinkmanship that their Indo-Pacific competition will embolden and that has been foreshadowed in policy.1 Strategic brinkmanship is the preparedness to take a country to the edge of war without ultimately having to do so; alternatively, it can be understood as a nation engaging in limited forms of warfare with limited aims, in the expectation of being able to extricate itself on its own terms. When a nation engages in brinkmanship, its policymakers believe that the opponent will eventually find the risks intolerable and will de-escalate the situation.2

In rather different post-Second World War contexts, the US and China have both become expert at strategic brinkmanship; arguably, they know no other way. US Cold War nuclear doctrine was founded on concepts of brinkmanship, perception and deterrence.3 A challenging security environment and a self-perception of great power status have seen China similarly prepared to engage in strategic brinkmanship. From its intervention in the Korean War and its conflicts with India, the Soviet Union and Vietnam between the 1950s and 1970s, to its provocative occupation of disputed South China Sea islands and its indiscriminate use of new technology such as cyber exploitation, China has been prepared to risk military escalation and unintended consequences where it has perceived its core interests to be at stake, and is now considered a prominent and permanent ‘Grey Zone’ actor.4

Competition for strategic advantage in the Indo-Pacific is multifaceted. But the most significant risk for Australia appears likely to play out—both visibly and covertly—in the race for technologically sophisticated artificial intelligence5 and associated technology such as autonomous systems (referred to here as ‘A/AI’),6 where successful risk-taking could confer a considerable or even decisive advantage—perhaps even more so than other developing technologies such as hypersonic weapons.7 Although many prominent practitioners have argued for widespread prohibitions on A/AI in warfare applications,8 the Indo-Pacific seems likely to become a live testing ground for advanced and potentially underdeveloped military A/AI hardware, including in conjunction with nuclear capabilities.


The implications for Australia of a military A/AI race are potentially immense, even bringing into question the efficacy of Australia’s defence policy over time. But A/AI are mostly absent from contemporary debate. This article will characterise the Australian strategic commentariat’s response to Indo-Pacific A/AI competition as contained, enplaned and restrained: that is, discussion is overly contained by geographical factors; capability recommendations predominantly relate to traditional military hardware such as crewed combat aircraft; and commentary mostly downplays or even warns against embracing A/AI as either a serious threat or a necessary capability. The contained, enplaned and restrained nature of the debate appears inconsistent with the likely regional outcomes of superpower A/AI competition.

This article will contend first that, based on historical behaviour, US and Chinese competition in the Indo-Pacific will embolden strategic brinkmanship; second that the incentives to rapidly introduce military A/AI are high for both the US and China; and finally that current Australian commentary insufficiently recognises the need to comprehensively address A/AI as an intrinsic aspect of military strategy, arguably leading to somewhat dated prescriptions.

To the Brink of Extinction

Former US Secretary of State John Foster Dulles was popularly associated with geostrategic brinkmanship, and the need to maintain an immense capacity for nuclear and broader military action to ensure peace and stability. His acceptance of extraordinary nuclear risk was based on his belief that unwillingness to go to the brink of nuclear war would perversely increase the risk of nuclear war.9

Dulles was a prominent nuclear actor, but he had many companions. A generation of US political leaders and scholars made strategic brinkmanship an art form during the Cold War. Military technological competition with the Soviet Union could have easily seen thermonuclear capabilities used as an instrument for both political and military purposes. Despite a view that nuclear weapons encouraged stability, there was no assurance that the US nuclear force posture would deter or prevent war. A feature of Cold War nuclear policy development was its incremental nature, and US policymakers effectively built their nuclear strategy concurrently with doctrine, force development and military posturing.10 While many have argued that a ‘peace through nuclear strength’ approach resulted in lengthy periods of geostrategic stability, such an approach inherently required a preparedness to engage in brinkmanship. This manifested as deliberate escalation of tension at certain times,11 and acceptance of great uncertainty in circumstances of immense gravity,12 particularly when periodic technological change (or even perceived technological change) altered the balance of power.

This Cold War experience was fundamental to the US conception of competition and deterrence, and US policymakers will almost certainly rely on this experience and doctrine as new strategic competition intensifies. While nuclear weapons are still highly relevant to geostrategic competition, strategic brinkmanship will also be demonstrated through the use of other emerging technology.

Although China’s experience was predominantly outside the nuclear realm, and its actions were consistently framed by Chinese leaders as being strategically defensive in nature,13 China also has a long history of engaging in strategic brinkmanship. In part, this brinkmanship was a response to the perception of threats to the Communist state from all sides, combined with a powerful sense of great power status and nationalism.14 Perceived and real challenges to Chinese territorial integrity were viewed as especially egregious threats, in response to which Chinese leaders were prepared to risk more, even just to maintain the status quo or achieve incremental gains.

Chinese leaders viewed their involvement in post-Second World War conflicts as defending their vital interests. Nonetheless, China’s participation in these conflicts demonstrated strategic brinkmanship and a willingness to accept significant risk associated with conflict escalation. Chinese involvement in the Korean War was a strategic gamble, due to China’s lack of material prosperity, ongoing domestic conflict and inferior military capability.15 China initiated the 1962 war with India, described as a ‘large scale self-defensive counterattack’,16 and in doing so courted immense risks including the potential to invite US involvement on the Indian side; international isolation; and an inability to prevent an extended and costly conflict. Similarly, the 1969 conflict with the Soviet Union was a ‘scary close call’ that could easily have resulted in a nuclear exchange.17 China risked reprisal from major powers and its economic modernisation in initiating war with Vietnam in 1979.18

More recently, China has been prepared to take strategic risks in the South China Sea region. Chinese activity in this region is often classified as ‘grey zone’ activity. It has involved militarisation of features such as the Spratly Islands;19 aggressive military manoeuvres;20 ramming of fishing boats;21 consistent military incursions into disputed territory;22 aggressive and cavalier cyber attacks;23 and breaches of international law. Such actions are indicative of a national leadership that is prepared to take military actions beyond what others would view as within normal bounds of behaviour, thereby demonstrating a preparedness to increase the risk of conflict. These actions challenge the US, its allies and regional nations to escalate their response, and as a consequence increase the risk of unintended conflict.

While the US managed constant nuclear risk over several decades, China most visibly displayed strategic brinkmanship in crisis situations throughout its 20th-century wars. Chinese crisis situations were frequent, however, and Chinese leaders also had to manage ongoing risk in fighting the Kuomintang. More recent Chinese grey zone operations show that China approaches risk, conflict and competition from a more continuous footing, and that military operations in peacetime form an inherent part of Chinese statecraft.24

In summary, although in different contexts, strategic brinkmanship has been central to the security approaches of the US and China since the Second World War. With geostrategic competition in the Indo-Pacific starting to characterise the US–China relationship,25 and with competition clearly being foreshadowed in high-level policy,26 it is likely that similar strategic brinkmanship will be realised. This article contends that the most important manifestation of strategic brinkmanship will be in military A/AI development.

Accelerating Algorithm and Tempo

A/AI development is regularly viewed optimistically, with a belief that society has the agency to effectively implement and regulate A/AI to achieve a positive net benefit. Areas such as health care stand to benefit significantly, so long as the ‘global good’ and ‘serving humanity’s interests’ are the highest priorities.27 Yet US Defense Secretary Esper’s declaration that ‘We have to get there first’, in relation to his view of US and Chinese development of cutting-edge military A/AI, suggests a more sobering perspective.28 Moreover, Secretary Esper’s definition of ‘there’ appears to represent a certain step into an uncertain realm.

The optimistic A/AI outlook seems difficult to reconcile with the world’s superpowers competing for military A/AI ascendancy and for control of closely related industries such as semi-conductor manufacturing.29 Prominent scholars have predicted that A/AI will cause the most significant transformation of warfare in the next 20 years.30 In 2017, Stephen Hawking argued that creating a successful AI would be the biggest event in human history but that, if the risks were not controlled, successful AI could also be the last event in human history.31 Whether this is an extreme view of A/AI is for now a matter of opinion; yet unrestrained and competitive A/AI development in the Indo-Pacific will surely move the region towards a far more dangerous strategic balance. The warning signs are evident, and technologists such as Rana el Kaliouby have identified that organisations and governments who own and control A/AI and data will have a significant advantage.32 Transparency of A/AI development will surely be an early casualty.

Autonomous systems and artificial intelligence are therefore at the leading edge of what some have termed the ‘security dilemma’: the idea that the efforts of some countries to improve their security by increasing military capability can cause escalatory competition.33 To be sure, A/AI is not the only aspect of the security dilemma currently capturing the US and China, but it is central to the concept of ‘offensive–defensive balance’. Scholars such as Jervis have argued that when a defensive force has an advantage over an offensive force (for reasons such as geography or military technology), the gap between two opposing nations’ capabilities would need to be greater for escalatory competition to occur.34 However, when offensive capabilities take the ascendancy, competition is incentivised.

Many commentators suggest that military technology, in general terms, currently offers nations an advantage when defending. For example, so-called ‘anti-access area denial’ capabilities present a formidable obstacle, during both periods of conflict and periods of competition, for any offensive force.

A/AI may be poised to change the costs of conflict, and potentially the balance between offence and defence, although this is uncertain.35 What is certain is that most nations are reluctant to expend blood and treasure in conflict. However, if nations must only risk treasure, through the widespread use of uncrewed and autonomous systems, the considerations for going to war fundamentally change. This is what former US Secretary of Defense Mattis suggested when he argued that A/AI have the potential to change both the character and the nature of warfare.36 Further, the risk of conflict escalation is considered greater when nations have less experience with the military tools at their disposal;37 the novel nature of A/AI is likely to increase the risk of conflict escalation if these capabilities are at the forefront of superpower competition.

A/AI could also impact on existing theories of nuclear deterrence. For example, a country may use data analytics to seek to determine with greater certainty whether a pre-emptive nuclear attack could render a strategic competitor incapable of responding with its own retaliatory nuclear attack; or a country may seek to incorporate A/AI into an automatic nuclear response.38 Concerningly, nuclear warfare theory warns that the vulnerability of an enemy’s nuclear forces could actually encourage a first strike in a time of crisis.39 This will not be helped by the unpredictability of aspects of A/AI behaviour.40

While the majority of A/AI research is currently being undertaken for non-military purposes, the immense geostrategic benefits are almost certain to see military A/AI development grow (‘aggressively’ in China already, by some accounts).41 Indeed, military forces may be the first to use advanced A/AI extensively.42 Further, A/AI development has significant ‘dual purpose’ characteristics: President Xi’s personal responsibilities as the head of China’s Military-Civil Fusion Development Commission will ensure the flow of relevant A/AI technology from commercial to military hands.43

No Taking A/AI from Our Cold, Dead Hand

Although Australia’s focus will remain on the US–China competition, Russia is set to lead a new round of strategic brinkmanship, with military A/AI the battleground. With a fraction of the national resources available to China and the US, Russia is using A/AI development as its latest effort to maintain a level of global influence at a low cost.44 Predictably, Russia’s A/AI efforts quickly transitioned from the more mundane use of AI (such as to interrogate large datasets) to the active use of military systems with existential implications, such as the Poseidon autonomous nuclear weapon-equipped underwater drone. This offers a troubling glimpse of how A/AI challenges existing paradigms of warfare (and general safety),45 and of how quickly unconstrained actors like Russia can put threatening A/AI into operational service. Previous Soviet attempts at automating nuclear responses, including the ‘Dead Hand’ system, and other efforts to prematurely integrate computer technology with nuclear decision-making46 show a longstanding Russian predisposition to remove humans from the loop in the employment of destructive weapons, and a level of comfort in engaging in strategic brinkmanship.

China is unlikely to show a marked difference from Russia in the way it employs military A/AI, both during peacetime and in periods of heightened tension. The tens of billions of dollars invested by China in A/AI are not unreasonable for an economy of its size; nor is the scale of development and investment in technologically sophisticated A/AI-related industries.47 It is also reasonable for China to aspire to lead the world in A/AI, even by framing A/AI as a new front line of global and military competition.48 And these are not implausible ambitions. Even in a short space of time, Western assessments of Chinese military A/AI capability have moved from a view of Chinese military A/AI effort as ‘largely abstract and speculative’,49 to a view of rapid and definite Chinese military A/AI progression.50

However, it is not just the latent military potential derived from nation-wide Chinese A/AI development that has concerned some Western policymakers and scholars. It is the near-certainty of minimal checks and balances being applied by a Chinese Communist Party that perceives many internal and external threats, and faces ongoing territorial disputes. As competition with the US intensifies, and as the US introduces its own A/AI into the region, the People’s Liberation Army (PLA) will sense pressure to introduce A/AI capabilities, perhaps before they are fully tested and assured. If the US can effectively achieve widespread military automation to ‘reduce the number of warfighters in harm’s way’ and allow machines to ‘perform higher risk missions’,51 there is a clear incentive for the PLA to similarly and rapidly introduce A/AI into the region. Global appeals to deny greater use of lethal autonomous weapons systems before they are effectively regulated or even understood have proven uninfluential to this point.52

Other scholars have made similar arguments. For example, Kania assessed that the PLA may ‘prove less averse to the prospect of taking humans “out of the loop” to achieve an advantage’ than other nations.53 Others have agreed that autonomous military systems are a predominant focus for Chinese development,54 adding weight to the idea that China (like the US) can take advantage of nascent technology with little legislative or policy codification (or international agreement) to gain competitive security advantages.

Therefore, the PLA seems postured to push the employment of A/AI to its limits; and, given the national priority and resources, the PLA is likely to quickly become proficient in A/AI employment. The organisational effort being applied to A/AI; the enormous datasets available (particularly through China’s extensive intelligence collection and its centralisation of military intelligence under the Strategic Support Force); the historical willingness to adjust doctrine to account for changing technology and strategic circumstances; and the experience gained in China through domestic automated surveillance efforts all mean that the PLA is well placed to be an early adopter of the technologies.

China has not been completely silent on the threat that may be posed by military A/AI competition. Some Chinese officials have publicly articulated their concern about the threat that an ‘AI arms race’ could pose to humankind, for example by reducing the threshold for military-related action due to a perceived likelihood of fewer casualties.55 Similarly, Chinese military A/AI development has coincided with a trend in the literature recommending that the US and China forge an agreement to regulate the advancement of A/AI in a military context.

However, any agreement is surely unlikely in the near term. The overwhelming weight of evidence firmly points to the US and China working to quickly develop and introduce military A/AI. China will continue to introduce its most advanced hardware well before any international A/AI agreements may be struck. For example, the Chinese ‘Marine Lizard’ appears to be an autonomous military amphibious vehicle which may form ‘swarms’ in advance of human soldiers, working in conjunction with other uncrewed aircraft and maritime platforms. Far from reluctantly or cautiously admitting to the existence of the Marine Lizard, the Chinese media enthusiastically introduced the platform to the public.56

Intelligence for Peace

Recent US analysis of the PLA has focused on the combat capabilities that would be used in conflict.57 While there is good reason for China’s combat capabilities to be prioritised, this focus has downplayed the importance of Chinese actions outside periods of conflict. Arguably, with a military philosophy that values ‘informatized warfare’,58 it is the information gained by China during peacetime that could decisively influence any future conflict. Much is already known about China’s extensive intelligence collection capabilities, ranging from satellite collection to mobile telephony interception, human intelligence and radar surveillance. The scale of this collection means that enormous quantities of data are being amassed.

This collected data is central to A/AI efficacy. During periods of competition, China will use A/AI to enhance its pervasive intelligence and data collection in the region. AI is well suited to the task of interrogating large datasets and identifying correlations that may not be recognised through human analysis. Further, outside of conflict, A/AI may operationalise China’s enormous data repository to allow it to be used more predictively, facilitating reasonable forecasts about what other regional military forces would do in certain situations. For example, AI may establish that a rise in the volume of mobile telephone communications above a certain baseline is an indicator of an imminent maritime deployment. There is evidence that, in domestic settings, Chinese law enforcement agencies have used data analysis to respond pre-emptively to potential security problems before they occur.59
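To make that logic concrete, the sketch below (in Python) shows the simplest form of the baseline-deviation indicator described above: daily communications volumes are compared against a rolling baseline, and days that sit well above it are flagged for analyst attention. The data, window length and threshold are entirely hypothetical, and real systems would be far more sophisticated; this is illustrative only.

```python
# Illustrative sketch only: a toy indicator that flags days where communications
# volume rises well above a rolling baseline. All figures are hypothetical.
import statistics

def flag_anomalies(daily_call_volumes, window=7, z_threshold=3.0):
    """Return indices of days whose call volume sits well above the
    preceding rolling baseline (mean plus z_threshold standard deviations)."""
    flagged = []
    for i in range(window, len(daily_call_volumes)):
        baseline = daily_call_volumes[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
        z_score = (daily_call_volumes[i] - mean) / stdev
        if z_score > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical daily volumes: a quiet baseline followed by a surge.
volumes = [100, 98, 103, 99, 101, 97, 102, 100, 99, 160, 175]
print(flag_anomalies(volumes))  # -> [9, 10]: the surge days are flagged for review
```

The point is not the specific statistic but the workflow: once data is collected at scale, even simple automated rules can convert it into predictive warning indicators.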

Chinese policymakers have demonstrated little restraint in employing advanced AI technology for internal monitoring. Surveillance efforts in Xinjiang Province have been extensively documented, but this surveillance is the tip of the iceberg. Given the close relationship between domestic and international security policies, it is no great leap to suggest that similar capabilities (such as facial recognition) are now employed to achieve geostrategic effects. Indeed, many commentators have highlighted that external security and foreign policy is greatly influenced by China’s domestic security situation, and that the capability transition from domestic to international is rapid.60

Put simply, China will use A/AI and more traditional sensors to collect enormous quantities of data on US, Australian and other security forces during periods of competition. AI-supported interrogation of this data will lead to unique conclusions, predictive behaviours and algorithmic development for autonomous systems. This data can then be used to inform the decisive employment of A/AI-enabled combat capabilities should geostrategic tensions grow.

The US also has a vast number of military-related A/AI projects, and is almost certain to move ahead rapidly in many areas. Some have argued that the US could be as prone to the unethical or dangerous use of A/AI as the Chinese, particularly in the rush to reach certain goals. However, the public and congressional debate in the US; the amount of information relating to A/AI being pushed by the military into the public domain;61 the longstanding US–Australia alliance and information-sharing arrangements; and some of the measures of transparency being undertaken by the US military62 provide an imperfect but much higher level of assurance of US A/AI practices in the Indo-Pacific. The same assurance measures are lacking in the Chinese context.

Covering our Ears and Closing our AIs

There are robust discussions about Australian defence policy and potential future threats in an era of Indo-Pacific strategic competition.63 There are also discussions in Australia about emerging military A/AI capabilities and issues such as ethics.64 However, these two topics are almost being treated as mutually exclusive. This article contends that A/AI is the fundamental technological advancement of the current period and that its implications must be translated into Australia’s defence policy. Failure to do so is leading to conclusions in Australian strategic commentary that appear outdated.

Strategically and technologically, much has changed in a short time. It is often argued that Australia’s 2016 Defence White Paper is dated in terms of its assessments relating to Indo-Pacific strategic circumstances. From an A/AI perspective, the 2016 White Paper also has the barest reference to ‘increasing automation’ in a period ‘beyond the next decade’.65 However, the subsequent debate among the strategic commentariat has rarely approached A/AI as a core issue for Australian defence policy, despite the fact that A/AI seems poised to greatly influence (or even supplant) current defence policies, particularly with US–China A/AI competition and enormous global A/AI investment continuing. Strategic commentators are perhaps not thinking in the ‘creative and unconstrained’ manner66 needed to grapple with the influence of technology on strategy.

This article identifies three trends in the Australian commentary that emphasise the inadequacy of the current discussion, as a result of which policy prescriptions tend to fall back on previous strategic debates. These trends can be characterised as contained, enplaned and restrained.

Contained: The Sea–Air Technology Gap

First, consistent with debates from the 1970s and 1980s, geography and the Australian military’s posture in relation to this geography continues to dominate much of the commentary. Debate is consistently contained around the need to posture military forces in Northern Australia67 and about the distance into the Indo-Pacific that Australia can militarily influence. Certainly geography is inescapable and is an essential topic for any discussion about force posture and military strategy, and there are strategic factors which are necessarily leading Australian policymakers to look closer to home.68

However, geographical limitations now relate more to political than to military capability. Even 20-year-old technologies, such as cyber capabilities, have markedly changed the way military operations may be influenced by geography.69 Cyber effects can only be conceived in a global context, and their introduction should at least nuance any assessment that Australia cannot hope to achieve a decisive military effect beyond a tightly defined arc because it is ‘too remote’ and ‘too difficult to influence’.70 Just because Australia’s national security community has not used cyber capabilities to significantly disrupt another nation does not mean that such capabilities lack potency and range.

Autonomous systems and artificial intelligence have the clear potential to extend this potency and influence across a far wider geographic extent. A/AI and other technologies such as satellite miniaturisation add considerable further weight to the idea that, if prioritised, geography may become much less of a military limitation. Uncrewed logistic replenishment, adaptive electronic attacks on infrastructure, counter-propaganda systems, and maritime swarming capabilities are examples of existing systems that allow greater force projection.71 In the context of defending the wide expanses of Australia, the number of sensors, change detection capabilities and data sources available is immense and can grow.
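As a simple illustration of the change detection idea referred to above (not a depiction of any specific ADF or allied system), the following Python sketch compares two hypothetical wide-area observation grids and flags the cells that have changed markedly between passes. The grids, threshold and injected ‘activity’ are invented for the example.

```python
# Toy wide-area change detection: compare two observation passes over a grid
# and report the cells whose values shifted markedly. Hypothetical data only.
import numpy as np

def changed_cells(before, after, threshold=0.3):
    """Return (row, col) coordinates of grid cells whose observed values
    shifted by more than the threshold between two sensor passes."""
    delta = np.abs(after - before)
    rows, cols = np.where(delta > threshold)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

rng = np.random.default_rng(seed=0)
before = rng.random((50, 50))      # hypothetical first-pass observation grid
after = before.copy()
after[10, 25] += 0.9               # inject a simulated change at one location

print(changed_cells(before, after))  # -> [(10, 25)]
```

Scaled across many sensors and data sources, this kind of automated triage is what allows a small force to monitor very large areas without proportionally more people.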

The technology does not obviate the need to prioritise, but technological trends necessarily offer more alternatives to mitigate traditional geographic constraints on military operations than are commonly presented. Geography is not an unchanging military constraint, and rapid A/AI development spurred by superpower competition will make the world smaller again.

Enplaned: Non-human Crew

Second, Australia’s changing strategic circumstances have tended to prompt recommendations relating to existing—crewed—military hardware. The strategic debate is enplaned on more crewed combat aircraft (among other traditional platform solutions) as the answer to new problems.

White recently recommended doubling Australia’s Joint Strike Fighter fleet and quadrupling Australia’s manned submarine fleet.72 Dibb and Brabin-Smith made similar arguments.73 Others have argued for retrofitting existing military capabilities to meet contemporary challenges.74 Such recommendations offer Australia greater military capability, and maximising existing capability is fundamental to Australian defence policy. And there is undeniably an enduring place in the Australian Defence Force (ADF) for sophisticated crewed platforms. But recommending ‘more of the same’ during a period of A/AI strategic brinkmanship is tantamount to prioritising combat against a previous conventional enemy.

‘More of the same’ appears poorly suited to mitigate the risks of regional A/AI competition. Crewed platforms face many challenges in operating against uncrewed swarming autonomous systems, not least that their adversaries have no fear for their lives. For example, the relatively small number of missiles carried by crewed aircraft or maritime vessels may limit their efficacy against an A/AI-enabled physical swarming attack. On the other hand, a defence consisting of a swarm of defending autonomous aircraft; ground-based laser weapon systems; and cyber, satellite and electronic interference technology, combined with crewed platforms, may prove more effective. A greater mass of less expensive uncrewed aerial, land and maritime platforms, combined with defensive weapon systems that can target a larger number of attacking systems, may also put fewer military personnel at risk.75 It is hard to conceive that the optimum response to the challenges posed by superpower A/AI competition is ‘more of the same’.
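A crude way to see the magazine-depth problem is the back-of-the-envelope calculation below. All figures are hypothetical placeholders rather than real platform characteristics: even with generous assumptions about missile performance, the number of crewed aircraft needed simply to match a large, inexpensive swarm grows quickly.

```python
# Back-of-the-envelope sketch of the magazine-depth issue when crewed platforms
# face a large autonomous swarm. All figures are hypothetical placeholders.
import math

def defenders_required(swarm_size, missiles_per_defender, kill_probability):
    """Crewed defenders needed to exhaust a swarm, given finite magazines
    and an assumed single-shot probability of kill per missile."""
    expected_kills_per_defender = missiles_per_defender * kill_probability
    return math.ceil(swarm_size / expected_kills_per_defender)

swarm = 200      # hypothetical number of inexpensive autonomous attackers
magazine = 8     # hypothetical missiles carried per crewed aircraft
pk = 0.8         # assumed per-missile probability of kill

print(defenders_required(swarm, magazine, pk))  # -> 32 aircraft, before attrition or re-attack
```

The specific numbers matter less than the shape of the exchange: finite magazines on a small number of expensive platforms scale poorly against mass, which is precisely the argument for mixing crewed platforms with cheaper, more numerous defensive systems.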

Other common discussions demonstrate a disjunction between strategy and technology. For example, ‘hardening’ forward bases on the mainland and on islands such as Cocos (Keeling) is a regular argument.76 Uncrewed or autonomous combat systems, supported by remote intelligence sensors and cloud processing, would appear to be a logical line of investigation to meet the assessed need to harden such facilities, rather than the placement of ‘a permanent Army garrison’ on remote bases.77 Yet A/AI or even remote options are not commonly proposed.

While some have argued for increasing investigation of A/AI for the ADF and in support of existing military forays into A/AI,78 the commentary in support of A/AI has focused on ‘complementing existing major platforms’.79 Few arguments have been made that military A/AI systems should be considered in a manner and on a scale similar to major capital procurements. Some have presented restrained views, such as that Australia should look to procure military A/AI because the US military budget is being structured to do so. This is a reasonable argument in an alliance context,80 but it is not a view grounded in the integration of technology and strategy. A major project to procure A/AI systems across the air, sea, land, space and information domains, and its relation to Australia’s defence policy, seems worthy of consideration by Australia’s best scholars. Such a major project may be well within the capability of Australia’s military industrial base.

To be clear, existing crewed military capabilities are making the ADF considerably more capable than it has been in the past. The question is: is ‘more of the same’ the logical next step, given the A/AI competition in the Indo-Pacific?

Restrained: Betting Against the Machine

Third, Australian defence policy commentary largely relegates A/AI as an issue subordinate to other military considerations, a future problem, or an overstated technology. Scholars have largely and deliberately restrained discussion of A/AI in an Australian defence policy context. A/AI undeniably attracts the same hype that internet development and the ‘revolution in military affairs’ suffered from in the 1990s. A/AI development will not be a linear upward progression; nor will every military A/AI development be realised. Clearly it is too early to warn that A/AI will render conventional military capabilities obsolete.81

However, claims that A/AI represents a ‘false dawn’ seem more misleading than the risk of overstating A/AI’s potential.82 Some commentators have manufactured particularly novel arguments to maintain the status quo with conventional military platforms and de-prioritise A/AI.83 White has argued that autonomous systems and A/AI are problems to watch over time, rather than respond to. For example, in assessing the value of pilotless drones, he argued that the many complexities inherent in managing crewed aircraft apply similarly to uncrewed aircraft, and did not pursue a line of inquiry relating to threat autonomous systems.84

Such claims encourage a sluggish Australian uptake of A/AI, and less consideration of A/AI as a threat. Ultimately a wait-and-see approach represents a risky bet against enormous investments being made by the world’s two superpowers and powerful corporate actors, during a period when they are vying for A/AI dominance. Many systems are already being fielded in the Indo-Pacific, including the extensive use of A/AI in intelligence development. Dismissing the offensive and defensive aspects of military A/AI in the contemporary Australian context seems particularly unwise.

The contained, enplaned and restrained nature of the contemporary Australian commentary on A/AI is skewing the answers to Australia’s most important strategic questions. The strategic questions remain similar to those posed in earlier decades, but consideration of information age technology should almost certainly lead to more nuanced answers. For example: How far can Australia project decisive military power using information age technology? How quickly can certain mass-produced autonomous military equipment be mobilised for war? Has the nature of US extended nuclear deterrence changed due to predictive systems? What information must Australia protect during peacetime to ensure its greatest effectiveness during war, and is the fundamental operating model of relying on classified information even viable?

A/AI should not be considered separately from, or as a vague supplement to, Australian defence policy. The Indo-Pacific risk factors that many Australian scholars now assess to have grown exponentially would see the extensive employment of A/AI against Australia in the envisaged scenarios. Yet many commentators remain unconvinced of the relevance of the accelerating change to the character of warfare, or that it is now playing out in the region. Australia’s defence policy discussion will be on a stronger footing if consideration encompasses how A/AI-related strategic brinkmanship is likely to transpire.

Conclusion

Strategic brinkmanship is spurring competition in military A/AI, and its effects are most pronounced in the Indo-Pacific. China’s A/AI development will be particularly opaque but highly active. This is changing the character of warfare, and is bringing into question the efficacy of national defence policies. Military A/AI capabilities are already being deployed, and some of these capabilities are likely to be underdeveloped and exhibit unpredictable behaviour as pressure to ‘win’ a US–China military A/AI race increases. A/AI capabilities will be brought to bear against Australia during periods of both competition and conflict.

Yet Australian strategic commentary has tended to treat Australian defence policy and military A/AI technology as two discrete issues. Commentary can be characterised as contained, enplaned and restrained: geographically contained ideas of military projection persist despite advancing technology that should at least qualify this approach; ‘more of the same’ military hardware is the most common policy prescription to address new challenges; and the importance of A/AI, from both an offensive and a defensive perspective, is consistently understated or avoided.

These aspects of the commentary are not allowing sufficient analysis of longstanding defence policy questions in relation to contemporary technology. The questions may remain the same but, as historically unusual as it may seem to observers of Australian defence policy, technology means that the answers may be changing.


Endnotes


1 United States Department of Defense, 2018, Summary of the 2018 National Defense Strategy of The United States of America (Washington, DC), 1–2.

2 Robert Powell, 2003, ‘Nuclear Deterrence Theory, Nuclear Proliferation, and National Missile Defense’, International Security 27, no. 4: 91.

3 Glenn Snyder, 1961, Deterrence and Defense: Toward a Theory of National Security (Princeton University Press), 1–7.

4 ‘Grey Zone’ can be defined as military and influence competition below the threshold of major war. See Lyle Morris, Michael Mazarr, Jeffrey Hornung, Stephanie Pezard, Anika Binnendijk and Marta Kepe, 2019, Gaining Competitive Advantage in the Gray Zone: Response Options for Coercive Aggression Below the Threshold of Major War (California: RAND Corporation), ix, 27.

5 Artificial intelligence can be defined as ‘any artificial systems that perform tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from their experience and improve their performance’. See United States House of Representatives, H.R. 4625 (115th Congress, 1st Session, 12 December 2017), 3.

6 Autonomous systems relate to the ‘level of independence that humans grant a system to execute a given task in a given environment’. See United States Army, 2017, Robotic and Autonomous Systems Strategy (Virginia: United States Army Training and Doctrine Command), 23.

7 While it is impossible to know the future, a number of scholars have assessed robotics and AI to be areas that will cause revolutionary change to warfare. For example, see Michael O’Hanlon, 2018, Forecasting Change in Military Technology, 2020–2040 (Washington, DC: Brookings Institute), 1.

8 Future of Life Institute, 2017, An Open Letter to the United Nations Convention on Certain Conventional Weapons, Future of Life Institute [website], at: https://futureoflife.org/autonomous-weapons-open-letter-2017

9 Townsend Hoopes, 1973–1974, ‘God and John Foster Dulles’, Foreign Policy 13: 154.

10 For example, in the different perspectives of theorists such as Wohlstetter and Brodie. See Michael Howard, 1992, ‘Brodie, Wohlstetter and American Nuclear Strategy’, Survival 34, no. 2: 111–112.

11 Snyder, 1961, 6–7.

12 William Courtney and Bruce McClintock, ‘Stabilizing the Nuclear Cold War’, The RAND Blog, 13 February 2020, at: https://www.rand.org/blog/2020/02/stabilizing-the-nuclear-cold-war.html

13 Kenneth Johnson, 2019, China’s Strategic Culture: A perspective for the United States (Strategic Studies Institute, United States Army War College), 10.

14 William Callahan, 2004, ‘National Insecurities: Humiliation, Salvation, and Chinese Nationalism’, Alternatives 29: 199.

15 Bangning Zhou, 2014/2015, ‘Explaining China’s intervention in the Korean War in 1950’, Journal of International Affairs 1: 13–14.

16 John Garver, 2006, ‘China’s Decision for War with India in 1962’, in Alastair Iain Johnston and Robert S Ross (eds), New Directions in the Study of China’s Foreign Policy (California: Stanford University Press), 84, 96.

17 Robert Farley, ‘Billions Almost Died: In 1969, Russia and China Almost Went to Nuclear War’, The National Interest [website], 26 October 2019, at: https://nationalinterest.org/blog/buzz/billions-almost-died-1969-russia-and-china-almost-went-nuclear-war-90986

18 Xiaoming Zhang, 2005, ‘China’s 1979 War with Vietnam: A Reassessment’, The China Quarterly 184: 859.

19 Steven Stashwick, ‘China’s South China Sea Militarization Has Peaked’, Foreign Policy [website], 19 August 2019, at: https://foreignpolicy.com/2019/08/19/chinas-south-china-sea-militarizat…

20 Luis Martinez, ‘Chinese Warship Came within 45 Yards of USS Decatur in South China Sea: US’, ABC News [website], 1 October 2018, at: https://abcnews.go.com/beta-story-container/Politics/chinese-warship-45…

21 Jason Gutierrez, ‘Philippines Accuses Chinese Vessel of Sinking Fishing Boat in Disputed Waters’, The New York Times [website], 12 June 2019, at: https://www.nytimes.com/2019/06/12/world/asia/philippines-china-fishing-boat.html

22 Lynn Kuok, 2019, How China’s Actions in the South China Sea Undermine the Rule of Law (Brookings Institute), 1.

23 Lyu Jinghua, ‘What are China’s Cyber Capabilities and Intentions?’, Carnegie Endowment for International Peace [website], 1 April 2019, at: https://carnegieendowment.org/2019/04/01/what-are-china-s-cyber-capabilities-and-intentions-pub-78734; Center for Strategic and International Studies (CSIS), 2019, ‘Significant Cyber Incidents’, CSIS website, at: https://www.csis.org/programs/technology-policy-program/significant-cyb…

24 Michael O’Hanlon, 2019, China, the Gray Zone, and Contingency Planning at the Department of Defense and Beyond (Washington, DC: Brookings Institute), 1–3.

25 David Lampton, 2019, ‘Reconsidering U.S.–China Relations: From Improbable Normalization to Precipitous Deterioration’, Asia Policy 14, no. 2: 43–44.

26 President Trump has signalled that he views the relationship almost exclusively in competitive terms. See Michael Swaine, 2018, ‘Chinese Views on the U.S. National Security and National Defense Strategies’, China Leadership Monitor 56: 1.

27 Janna Anderson and Lee Rainie, ‘Artificial Intelligence and the Future of Humans’, Pew Research Center website, 10 December 2018, at: https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/

28 Quoting US Defense Secretary Esper in relation to military AI competition with China. See Harry Lye, ‘Could China Dominate the AI Arms Race?’, Army Technology website, 20 January 2020, at: https://www.army-technology.com/features/china-ai-arms-race/

29 Alex Capri, 2020, Semiconductors at the Heart of the US–China Tech War: How a New Era of Techno-Nationalism is Shaking up Semiconductor Value Chains (Hinrich Foundation), 6.

30 O’Hanlon, 2019, 1. Also see ‘Artificial Intelligence is Changing Every Aspect of War’, The Economist [website], 7 September 2019, at: https://www.economist.com/science-and-technology/2019/09/07/artificial-…

31 Arjun Kharpal, ‘Stephen Hawking Says A.I. Could Be “Worst Event in the History of Our Civilization”’, CNBC website, 6 November 2017, at: https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html

32 Rana el Kaliouby, ‘5 Truths About Artificial Intelligence Everyone Should Know’, Inc. [website], 22 April 2019, at: https://www.inc.com/climate-change-health-care-food-drink-community-up-…

33 Jennifer Lind, 2014, ‘Geography and the Security Dilemma in Asia’, in Saadia Pekkanen, John Ravenhill and Rosemary Foot (eds), Oxford Handbook of the International Relations of Asia (Oxford University Press), 719.

34 Robert Jervis, 1978, ‘Cooperation Under the Security Dilemma’, World Politics 30, no. 2: 188.

35 Ben Garfinkel and Allan Dafoe, ‘Artificial Intelligence, Foresight, and the Offense–Defense Balance’, War on the Rocks [website], 19 December 2019, at: https://warontherocks.com/2019/12/artificial-intelligence-foresight-and-the-offense-defense-balance/

36 Aaron Mehta, ‘AI Makes Mattis Question “Fundamental” Beliefs about War’, C4ISRNET [website], 17 February 2018, at: https://www.c4isrnet.com/intel-geoint/2018/02/17/ai-makes-mattis-questi…

37 Forrest Morgan, Karl Mueller, Evan Medeiros, Kevin Pollpeter and Roger Cliff, 2008, Dangerous Thresholds: Managing Escalation in the 21st Century (RAND, Project Air Force), xviii.

38 A/AI may have perceived utility to nuclear concepts such as ‘Launch on Warning’. See David Wright, 2016, ‘Nuclear Weapons and the Myth of the “Re-Alerting Race”’, Future of Life Institute website, 7 September 2016, at: https://futureoflife.org/2016/09/07/nuclear-weapons-myth-re-alerting-ra…

39 Marc Trachtenberg, 1989, ‘Strategic Thought in America, 1952–1966’, Political Science Quarterly 104, no. 2: 318.

40 Andrew Ilachinski, 2017, Artificial Intelligence & Autonomy: Opportunities and Challenges (CNA), 24.

41 Gregory Allen, 2019, Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security (Washington, DC: Center for a New American Security), 5.

42 Peter Thiel, ‘Good for Google, Bad for America’, The New York Times [website], 1 August 2019, at: https://www.nytimes.com/2019/08/01/opinion/peter-thiel-google.html#click=https://t.co/XdDvxUSG7a

43 Elsa Kania, 2018, Technological Entanglement: Cooperation, Competition and the Dual-Use Dilemma in Artificial Intelligence (Australian Strategic Policy Institute, Policy Brief Report No. 7/2018), 8.

44 Alina Polyakova, ‘Weapons of the Weak: Russia and AI-Driven Asymmetric Warfare’, Brookings website, 15 November 2018, at: https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-dr…

45 Norman Polmar, 2019, ‘“Status-6” Russian Drone Nearly Operational’, U.S. Naval Institute website, at: https://www.usni.org/magazines/proceedings/2019/april/status-6-russian-…

46 Doug Irving, ‘How Artificial Intelligence Could Increase the Risk of Nuclear War’, The RAND Blog, 23 April 2018, at: https://www.rand.org/blog/articles/2018/04/how-artificial-intelligence-…

47 Allen, 2019, 19.

48 Ibid., 3.

49 Jeffrey Ding, 2018, Deciphering China’s AI Dream: The Context, Components, Capabilities and Consequences of China’s Strategy to Lead the World in AI (Future of Humanity Institute, University of Oxford), 31.

50 Ely Ratner et al., 2019, Rising to the China Challenge: Renewing American Competitiveness in the Indo-Pacific (Washington, DC: Center for a New American Security), 3, 18, 21.

51 United States Army, 2017, 1–3.

52 Daniel Hoadley and Nathan Lucas, 2018, Artificial Intelligence and National Security (Congressional Research Service, R45178), 7.

53 Elsa Kania, 2017, Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power (Washington, DC: Center for a New American Security), 5.

54 Hoadley and Lucas, 2018, ‘Summary’.

55 Allen, 2019, 4.

56 Michael Peck, ‘Meet the Marine Lizard: Is China’s New Tank All Hype?’, The National Interest [website], 18 April 2019, at: https://nationalinterest.org/blog/buzz/meet-marine-lizard-chinas-new-ta…

57 Defense Intelligence Agency, 2019, China Military Power: Modernizing a Force to Fight and Win (DIA-02-1706-085), 24.

58 M Taylor Fravel, 2020, ‘China’s “World-Class Military” Ambitions: Origins and Implications’, The Washington Quarterly 43, no. 1: 85–86, 95–96.

59 Nathan Vanderklippe, ‘China Using Big Data to Detain People Before Crime Is Committed: Report’, The Globe and Mail [website], 27 February 2018, at: https://www.theglobeandmail.com/news/world/china-using-big-data-to-detain-people-in-re-education-before-crime-committed-report/article38126551/

60 Murray Scot Tanner, ‘China’s Social Unrest Problem’, testimony before the U.S.–China Economic and Security Review Commission, 15 May 2014, 2.

61 United States Department of Defense, 2019, ‘Campaign for an AI Ready Force’, Department of Defense website, at: https://media.defense.gov/2019/Oct/31/2002204191/-1/-1/0/CAMPAIGN_FOR_AN_AI_READY_FORCE.PDF

62 Defense Innovation Board, 2019, ‘Defense Innovation Board’s AI Principles Project’, Defense Innovation Board (United States Department of Defense) website, at: https://innovation.defense.gov/ai/

63 For example, Paul Dibb and Richard Brabin-Smith, 2017, Australia’s Management of Strategic Risk in the New Era (Australian Strategic Policy Institute, Strategic Insights), 9.

64 Marcus Hellyer, ‘Ethical AI for Defence’, The Strategist, 20 August 2019, Australian Strategic Policy Institute website, at: https://www.aspistrategist.org.au/ethical-ai-for-defence/

65 Australian Department of Defence, 2016, 2016 Defence White Paper (Commonwealth of Australia), 148.

66 Chief of Army, 2018, Accelerated Warfare: Futures Statement for an Army in Motion (Canberra: Australian Army), 2.

67 Paul Dibb, ‘Planning to Defend Australia in an Era of Profound Strategic Disruption’, The Strategist, 15 October 2019, Australian Strategic Policy Institute website, at: https://www.aspistrategist.org.au/planning-to-defend-australia-in-an-era-of-pro…

68 Andrew Carr, 2019, ‘No Longer a Middle Power: Australia’s Strategy in the 21st Century’, Focus Stratégique 92: 11.

69 For example, see James Acton, 2020, ‘Cyber Warfare and Inadvertent Escalation’, Dædalus 149, no. 2: 133–134, for discussion of the potential for decisive global cyber operations against nuclear capabilities.

70 Hugh White, 2019, How to Defend Australia (Melbourne: La Trobe University Press), 73.

71 United States Army, 2017, 1, 3.

72 White, 2019, 187, 227.

73 Dibb and Brabin-Smith, 2017, 10.

74 Peter Hunter, 2019, Projecting National Power: Reconceiving Australian Air Power Strategy for an Age of High Contest (Australian Strategic Policy Institute, Special Report), 4.

75 Paul Scharre, ‘Counter-Swarm: A Guide to Defeating Robotic Swarms’, War on the Rocks [website], 31 March 2015, at: https://warontherocks.com/2015/03/counter-swarm-a-guide-to-defeating-ro…

76 Dibb and Brabin-Smith, 2017, 9.

77 Stephan Frühling, ‘Reassessing Australia’s Defence Policy (Part 3): Preparing for Major War in the 2020s’, The Strategist, 6 February 2020, Australian Strategic Policy Institute website, at: https://www.aspistrategist.org.au/reassessing-australias-defence-policy…-3-preparing-for-major-war-in-the-2020s/

78 Robbin Laird, ‘Shaping an Australian Navy Approach to Maritime Remotes, Artificial Intelligence and Combat Grids’, SLD Info website, 3 February 2020, at: https://sldinfo.com/2020/02/shaping-an-australian-navy-approach-to-maritime-remotes-artificial-intelligence-and-combat-grids/

79 Frühling, 2020.

80 Michael Shoebridge, ‘Urgent Lessons for Australia in US Defence Budget’, The Strategist, 10 March 2020, Australian Strategic Policy Institute website, at: https://www.aspistrategist.org.au/urgent-lessons-for-australia-in-us-defence-budget/

81 Scharre, 2015.

82 Zac Rogers, ‘Have Strategists Drunk the “AI Race” Kool-Aid?’, War on the Rocks [website], 4 June 2019, at: https://warontherocks.com/2019/06/have-strategists-drunk-the-ai-race-ko…

83 For example, concern about an unmanned maritime surface vessel being seized by a ‘special forces-configured submarine deploying boarding parties in small craft’, with the special forces then using those autonomous systems against Australia. See James Goldrick, ‘“Taking Back the Seas”: Boosting the Lethality of Naval Surface Forces’, The Strategist, 26 February 2020, Australian Strategic Policy Institute website, at: https://www.aspistrategist.org.au/taking-back-the-seas-boosting-the-lethality-of-naval-surface-forces/

84 White, 2019, 187, 227.