Soft Power ‘Sky Net’ – The Importance of Artificial Intelligence for Information Operations
Introduction
Military practitioners are undoubtedly becoming more aware of the benefits that artificial intelligence (AI) can confer on military operations. But what exactly is AI? Popular media would have us imagine swarms of killer drones, unmanned vehicles such as personnel carriers, planes, tanks or even robots (perhaps not unlike the droid armies from Star Wars). While no single definition of AI exists, there is a common understanding that AI is a programmed system that learns by itself, or can ‘learn how to learn’, and therefore can reason.[i] This definition certainly includes the popular ideas noted above, but beyond science fiction, some regard AI’s greatest military utility as providing computer-assisted decision-making tools. Research and development of AI has progressed steadily since the 1940s, to the point that many technology end users are oblivious to the presence of AI in their consumer electronic devices.[ii] AI’s advances are creating new opportunities for military use, which many of the world’s largest militaries understand and want to capitalise on.[iii] Among the many notions that fall within the concept of AI, the technology’s current and emerging capabilities also present a range of remarkable possibilities for how future militaries can execute information operations (IO).
IO will evolve considerably over the coming decade and remain a key capability enabler for future strategic outcomes. The present-day information environment is complex, and its ubiquitous nature has drawn the attention of the world’s super and middle powers seeking to gain a military advantage.[iv] This situation was recognised in July 2016 when the North Atlantic Treaty Organisation (NATO) acknowledged cyber as the fifth warfighting domain.[v] Such recognition significantly expanded the potential scope of IO into never-before-seen levels of connectivity and influence.[vi] Global powers are racing to capitalise on the military opportunities that AI presents, creating a sense of urgency for Australia’s challenge to develop its own AI-enabled capabilities. As a result, the Australian Army and broader Australian Government are closely interested in opportunities to develop a robust IO concept, supported by information-related capabilities (IRCs), to be ‘future ready’ and able to contribute with relevance to coalition forces.
This paper will evaluate how AI can enhance information operations to benefit the Australian Army in the future operating environment. In doing so, it will describe a hypothetical near-future scenario as a thought experiment to illustrate AI’s future potential. An understanding of IO will then be developed, before two IO capabilities are explored—psychological operations and the cyber domain—to appreciate how AI will enhance their potential in the future. Using the scenario as the basis for analysing the potential of AI, the paper concludes with some ramifications for the Australian Army. Ultimately, as shall be shown, the Australian Army should develop expertise in AI, specifically for the purpose of IO, if it wishes to capitalise on opportunities that will almost certainly benefit from, and may indeed depend on, the employment of AI.
Thought Experiment
The following hypothetical scenario is a thought experiment whereby the potential consequences of AI-enabled information operations will be explored to aid in understanding this paper’s evaluation. Many of the capabilities described are, in reality, more advanced, while others are still very much conceptual yet are trending towards development in the near future.[vii] In that context, the thought experiment presented here may be considered to unduly favour one party to the hypothetical situation presented. Further, the solution posited depends on the delivery of a coordinated, synchronised effort by China that may appear unrealistic; ‘If only we could be so organised’ would be a perfectly acceptable response. However, as this paper will demonstrate, when combined with AI, the capability of certain IRCs has sufficient potential that the circumstances described by the thought experiment are entirely plausible.
Since 2023, the Chinese Communist Party (CCP) has expanded its regional influence within the South-West Pacific through a combination of economic coercion, the delivery of military and information technology capability, and subtle human intelligence efforts led by its United Front Work Department (UFWD). Importantly, by co-opting humanitarian aid and infrastructure development organisations, the CCP was able to generate positive regional influence towards its narrative.[viii] The increasingly favourable environment for the CCP, in turn, emboldened increasingly aggressive CCP military activity akin to that previously witnessed around the Spratly Islands and the broader South China Sea. In early 2025, the US responded to this perceptible shift in Chinese influence by increasing its commitment to the Cobra Gold exercise in Thailand and deploying a US-led coalition to the South-West Pacific. The mission was expeditionary, designed as a ‘tour de force’ to project influence and counter the CCP’s success. While the primary purpose of the US-led coalition was to shape the strategic environment in favour of Western political influence, it also provided the basis to respond to an escalation in tensions, including any potential land-based conflict, with a corps-sized Combined Joint Task Force (CJTF).
Supporting the CJTF is the latest iteration of a machine-learning ChatBot dubbed ‘DC’ that provides automated public affairs updates and pro-US coalition targeted messaging to global governments, media outlets and large social media sites. DC was supported by military cyber elements from the Five Eyes nations to trial its potential in both intelligence collection and information operations. Additionally, Australia, Britain, Canada, France, Japan, Malaysia, New Zealand and Singapore joined the US in committing ground forces to the Philippines, Japan and Papua New Guinea for partnered training operations, and to exercise their freedom of navigation in the South China Sea and surrounding maritime zones. These elements possessed tactical cyber units whose role was to enhance the host nation’s cyber defence capabilities, as well as to conduct digital reconnaissance for DC and counter any Chinese People’s Liberation Army (PLA) cyber activity.
On 15 March 2025 tensions escalated when, simultaneously with the ongoing military exercises, the PLA conducted military rehearsals of a similar scale to those witnessed after Nancy Pelosi visited Taiwan.[ix] In response, the US shifted the position of its maritime task forces closer to the first island chain and increased its deployment of air task groups into key strategic air bases in the Philippines, Malaysia and Japan. On 26 April 2025 the CCP’s Maritime Militia disrupted maritime movement around Singapore and the Indonesian islands of Bintan and Batam by harassing local economic shipping. Civil unrest followed in the months of April, May and June across Vietnam, Laos and Malaysia, promoted by CCP-sponsored criminal groups and an extensive anti-government campaign driven by influential online personalities (sponsored by the UFWD) that had been running for several months. Shortly afterwards, discontent erupted in Papua New Guinea and Solomon Islands, albeit with less social impact.
Over the same period, numerous ChatBots began a disinformation campaign about the Spratly Islands disputes. By exploiting cultural beliefs, trends, and topics of interest, the ChatBots successfully inflamed public sentiment about the West’s actions, which in turn began to polarise the abovementioned nations’ stances towards the US-led coalition’s behaviour.[x] The situation was exacerbated by a successful hack against DC, which compromised the integrity of the ChatBot’s data and implanted misinformation into its databases in a way designed to exploit political fault lines within the US community. These actions would ultimately lay the foundations for the public distrust in Western political and economic organisations that would build over the coming days.
Today is 28 November 2025.
Thirty-four days ago, a US strategic surveillance asset was lost over Singapore while monitoring the Spratly Islands. After a failed recovery operation, it was ascertained that the asset was grounded by a cyber-attack that originated in Macau. The asset was recovered by CCP Maritime Militia. Damage control measures were enacted by the US, but not before several secret compartments may have been breached through a combination of the asset’s connectivity to the US satellite system and a zero-day vulnerability identified in the Microsoft SharePoint software used within the US’s Secret Internet Protocol Router network.[xi] The assumption that the US Department of Defense’s secret information technology had been breached by a zero-day vulnerability, and conjecture about whether China truly possessed the capability to breach US satellite security within the time frame, paralysed US decision-making. Consequently, more drastic security controls, such as complete network closures, were delayed.
Twenty-three days ago, on the evening of 5 November, a suspected CCP psychological operations campaign was launched across many news networks (both Western and Eastern), as well as across multiple online news and social media platforms. Based on disinformation promulgated through this campaign, many international headlines declared that US coalition regular and Special Forces had committed war crimes in Manila and Ho Chi Minh City. Later the same evening, unidentified forces launched attacks on coalition bases at Clark Air Base, at Subic Bay and against the Philippines Joint Task Force National Capital Region headquarters in Manila.
Breaking international news on the morning of 6 November showed video footage of the alleged attacks. The reports displayed coalition forces in contact with fleeing Filipino civilians. No recognisable insurgent or threat forces appeared throughout the imagery. Most concerning to Western nations, even the headlines of reputable news services reinforced the previous evening’s reports of coalition war crimes. For well-informed observers, the media coverage was clearly unbalanced. Nonetheless, it resonated widely across the globe. In response, the US strongly denounced the reports as fake and launched an investigation into the facts to buttress its position. The US’s public announcement of the investigation was immediately followed by media reports of voice recordings from coalition commanders that included verbal commands to ‘execute the prisoners’ and ‘shoot anyone to our front’. The recordings were provided to the BBC and SBS via emails from their own organisations’ news reporters located in the Philippines. Within 24 hours, the news outlets acknowledged that their reporters’ email accounts had been hacked.
US analysis of the voice recordings has quickly established that the transcripts were captured from combat radio nets, satellite communications transmissions, and personal phone calls that were relayed through CCP-compromised satellites and business infrastructure local to Manila, including the 5G cellular network. When the individuals who allegedly made the commands were questioned, they recognised their voices and confirmed they spoke elements of the transmissions. However, they strongly deny any wrongdoing. Officials believe that the voice recordings have been modified. Regardless, the recordings have proved convincing, and confusion among both the public and government is hindering an effective and unified response.
Unknown to the US, the source of the recordings was a specialist cyber unit of the PLA—Unit 61398. This unit used digital transmissions of coalition ground forces (that had been captured from radio, satellite and internet-based communications) and then modified the raw data to generate statements that supported the ‘killing civilians’ deception. The majority of this work was conducted in real time by an AI algorithm that had developed a database of captured transmissions and was monitoring all news services worldwide.
At the same time, families of soldiers deployed to Manila have begun receiving phone calls from ‘US officials’ (in reality, fluent and articulate English-speaking members of the CCP provided with phone numbers captured through a cyber-attack and aided by information obtained through the exploitation of the captured surveillance asset). In these calls, family members are advised that their relative is under investigation for war crimes and are asked to call an official hotline if their serving relative contacts them. Simultaneously, social media platforms and UFWD-influenced media outlets have published a list of names of soldiers under investigation for war crimes. Included in this list are the names of the soldiers and commanders featured in the voice recordings. The perceived veracity of these recordings has been further reinforced by the public release of a deep-fake video communiqué showing the Joint Task Force commander issuing arrest warrants for these suspects.
With time, the facts have revealed themselves. However, many people still question the truth. This situation is exacerbated by ongoing media stories that continue to discuss the situation (fuelled by numerous AI algorithms). Many experts push strongly to discredit the Chinese IO campaign; however, the speed of the campaign’s execution has made this very difficult. Furthermore, since the height of the incident, the political and military decision-makers of all Western nations involved have been purely responsive and defensive, degrading their ability to regain the initiative in the situation.
In the last 48 hours, internet access in southern Japan, Hong Kong, South Korea and Taiwan has been cut, while a Wall Street hack has inserted a modified automated trading algorithm that has triggered a $1.5 trillion market collapse. Additionally, global marine traffic services have been hacked to show a large flotilla of Chinese cargo and military ships moving north to the seas around Taipei. Simultaneously, real cargo vessels broadcasting Singaporean Automatic Identification System (AIS) signatures have disgorged ground forces inside Kaohsiung City port. The forces have commenced seizing their immediate area and establishing A2/AD capabilities. Earlier today, the Singaporean Prime Minister publicly denied any Singaporean involvement and blamed the CCP. This denial was quickly followed by a second public statement (a deepfake) by the Prime Minister, retracting his initial denial and claiming instead that the Singapore Armed Forces (SAF) had ‘gone rogue’.
Six hours ago, it was confirmed that the Kaohsiung City attackers were members of the PLA dressed in SAF-pattern camouflage uniforms. The confusion has bought the PLA approximately 12 hours, which may prove decisive in the coming days.
Information Operations
Information warfare, information support operations, information environment, psychological operations, electronic warfare, public affairs, strategic communications, and cyber space operations are terms commonly used in the context of IO and have their genesis in the 20th century. Despite the relative infancy of these terms, IO is as old as warfare itself. One of the oldest written works of Western literature, the Iliad, discusses the Greeks’ use of deception (a form of IO) against the Trojans by building a large horse as an offering to Athena for their transgressions at Troy.[xii] Unfortunately for the Trojans, the Greeks had not fled the field of battle and their offering contained a deadly cargo. Similarly, the much-quoted strategist Sun Tzu described all warfare as deception, alluding to the fact that all warfare combines the use and misuse of information, supported by martial skill.[xiii]
Examples of deception and information manipulation permeate history and demonstrate that IO is not merely an addition to warfare or a modern concept. Rather, it is integral to the very nature of warfare, competition, and even cooperation. All wars are influenced by IO, just as they are by manoeuvre. The United States Army’s latest operations doctrine highlights the advantage the information dimension offers to friendly force manoeuvre, denying the same advantage to the enemy, and how movement and manoeuvre are interwoven with IO as they mutually support one another.[xiv] Indeed, the relevance of information and influence to the effective prosecution of military operations is so pronounced that some commentators have claimed that IO should be the hub from which all other components of military operational support radiate.[xv] Whether or not this assertion is firmly founded, it illustrates that IO remains highly relevant to the successful conduct of contemporary and future operations.
Notwithstanding the unpredictability of world history, one enduring observation can be made based on events of the last century. Specifically, the transition between conflict, competition and cooperation can be swift and sometimes simultaneous. It has never been linear or evenly spaced in time.[xvi] This truism is unlikely to change, but the speed at which transition occurs is set to hasten, particularly given the unprecedented global proliferation of information technology. Due to this rapid change, predicting the nature of warfare is possibly more difficult now than it has been during any other period in human history. Yet what is predictable, with a high degree of certainty, is that the threat from foreign information operations will increase, necessitating that all nations compete in the information environment.[xvii]
To compete in the future strategic environment requires effective IRCs. Two of the IRCs which stand to improve the most from enhancements to AI technology are psychological operations (PSYOPS) and cyber operations. Combined with powerful computing hardware and advanced AI algorithms, these IRCs could produce a truly unique and disruptive capability—an AI IO machine. To explain why, this paper will consider the relevance of each IRC to near-term future operations. This analysis will provide the basis for further discussion about the potency of AI.
Artificial Psychology
PSYOPS is concerned with influencing others and is often combined with deception to blend truth and fiction. The purpose of PSYOPS is to cause the target to take actions that support the accomplishment of the friendly mission.[xviii] PSYOPS has been an enduring element of warfare and manoeuvre since mankind began considering warfare as a profession.
In the modern era, PSYOPS is routinely incorporated into offensive support. The purpose of offensive support in military options is to apply ‘cannon, rocket and missile fire as well as [to help] integrate all lethal and nonlethal fire support assets into combined arms operations’.[xix] Planning offensive support is often referred to as ‘targeting’. PSYOPS contributes to targeting by providing information concerning cultural sensitivities, providing messaging towards friendly and threat forces (divisive or cohesive) and countering enemy propaganda.
Throughout the targeting process, PSYOPS provides a range of support to military decision-makers. For example, it provides information concerning the potential cultural impacts of proposed targeting missions; assists in target location using local information collection, as well as provoking the target to reveal itself; delivers deception effects that, in turn, support the delivery of lethal fires or that disrupt the threat’s manoeuvre or tempo (the ability to act more quickly than an opponent); protects friendly capabilities by spreading misinformation about the force’s capabilities and locations (for example); reduces the risk of civilian casualties by informing the need to encourage their departure from target areas; and reinforces the success of a targeting mission (demoralising the threat after a successful fire mission, for example).[xx]
Another developing area of PSYOPS is diplomacy and reputational manipulation. This aspect of PSYOPS refers to ‘the act of creating a false belief that an event has occurred to influence geo-political decisions’.[xxi] Modern AI networks can produce fake media that blends video and audio with incredible realism, and can distribute such media at a speed that ensures the information environment is saturated with false information quickly.[xxii] Such fakes can take the form of media fabricated from scratch, or so-called ‘deepfakes’, which involve superimposing the target (most commonly their face, and to a lesser extent their voice) onto other media to discredit them.[xxiii] Contemporary technology is sufficiently well advanced to create impersonations of anyone, but the quality of the media generated is directly related to how much original content is available. For example, at present, hours of recorded video of a target’s face are required to generate even a short yet credible impersonation.
The ease with which audio fabrication can be achieved is improving at a particularly rapid rate. Audio fabrication was perhaps first highlighted in 2017 by the speech synthesis technology Lyrebird, which could generate lengthy imitation clips from as little as one minute of recorded audio. Now, in 2023, the software VALL-E can imitate a voice from as little as a three-second sample.[xxiv] Moreover, fake but authentic-seeming video and audio can now be created on standard consumer-level hardware, granting anyone with access to cheap computing technology the power to generate convincing fake media.[xxv] Even without video and audio, modern AI ChatBots (discussed further below) can create convincing information that can flood the internet rapidly.[xxvi]
The potential for these capabilities to disrupt the reputations of key organisational leaders and to inflame societal tensions is self-evident. For example, a credible and well-planned deepfake campaign could aggravate political tensions and potentially change the outcome of democratic elections.[xxvii] Further, as it becomes increasingly difficult to discern real products from fake ones, authorities may encounter decision paralysis. For instance, if a brigade commander’s voice (or worse, face) were convincingly mimicked, the potential would exist for their orders to be ignored, questioned or, worse still, executed. Until the truth was confirmed, the risk of military miscalculation would be unacceptably high. Beyond the political and military context, the far-reaching consequences of such technology extend to government policy-making, legal and judicial systems, and even health care, including the administration of drugs and treatments. Further, the number of adolescents who access the internet is increasing.[xxviii] Ethical considerations aside, readers should ask themselves what the impact of AI-driven influence campaigns against a population’s teenage youth would be, and then consider that such influence already occurs through digital marketing, making it difficult to see why military or government IO could not play the same game. Of course, none of this would be possible were it not for the near omnipresent access to the cyber domain around the globe.
Cyber Operations
Uniquely the cyber domain permeates all other domains, linking them together in an unprecedented fashion.[xxix] Unlike the other domains (land, sea, air and space), which make physical contact with one another and can influence each other with a physical attack, the cyber domain has no physical presence. While information technology creates connections and interfaces with cyber that allow the other domains to ‘touch’ it, the cyber domain remains ‘not real’. Put another way, physical computer systems can be destroyed—severing the physical link to the cyber domain—but the data contained within may be backed up on another system, thereby rendering it immune to physical attack.[xxx]
Attacks conducted through the cyber domain can readily disable physical equipment. For example, they could force a Joint Strike Fighter to crash or drop its weapons early, thus physically influencing other domains (the air domain, in this case).[xxxi] In this new operating environment, actions traditionally isolated within one domain (such as land) can now be simultaneously linked through cyber warfare to another (such as the sea) thus generating new military effects and commensurate vulnerabilities.
Because cyber threats can be generated by low-skilled workers at low production cost, it is no surprise that they will only become more pervasive in the future.[xxxii] If the ‘internet of things’ continues to evolve at its current rate, the future cyber environment will contain trillions of connections, each of them providing a vector for attack, information collection, or influence.[xxxiii] In such a world, the threats posed by malicious software include corrupting information; destroying critical IT infrastructure; collecting information; deploying false information; compromising communications links to gain access to information deeper in the network; acting as a ‘digital deception’; and targeting life-support functions (such as subtly increasing air conditioning temperature to agitate the human workforce)—among other creative uses. These threats could come from states of any alignment, criminal or terrorist groups, issue-motivated groups, or civilians—not unlike what has been demonstrated, and is expected, from hybrid warfare and the increasing use of social media.[xxxiv]
Social media is an important consideration for IO and for how influence can be generated in the cyber information environment. The rapid growth of social media over the last 15 years has demonstrated its potential as an exceptional repository of information and a communications node with global reach. Being information dense, social media platforms contribute immensely to the information environment, which both benefits and disrupts IO efforts. For example, the information presented publicly on social media can readily generate viable and effective influence effects. After all, as has been observed, ‘[o]n any given day, there are up to 3.9 billion people online, all theoretically within range of a meme’.[xxxv] Information obtained from social media is increasingly likely to support the creation and understanding of network maps concerning individuals and organisations. This in turn will support better understanding and preparation, or shaping, of the operating environment and strategic influences.
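The network mapping described above can be sketched in only a few lines of code. The following Python fragment is a deliberately simplified illustration, using entirely hypothetical actors and interactions; a real collection system would ingest far richer data and apply proper graph analytics rather than a simple degree count.

```python
# Toy illustration: deriving a simple network map from public
# social-media interactions. All actors and records are hypothetical.
from collections import defaultdict

# Hypothetical interaction records harvested from public posts:
# each pair indicates a reply, mention or share between two actors.
interactions = [
    ("org_A", "influencer_1"), ("org_A", "influencer_2"),
    ("influencer_1", "user_x"), ("influencer_1", "user_y"),
    ("influencer_2", "user_x"), ("user_x", "user_y"),
]

# Build an undirected adjacency map of who interacts with whom.
graph = defaultdict(set)
for a, b in interactions:
    graph[a].add(b)
    graph[b].add(a)

# Degree centrality: actors with the most connections are the most
# promising nodes for influence efforts or further collection.
centrality = {node: len(neighbours) for node, neighbours in graph.items()}
for node, degree in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(node, degree)
```

Even this toy version surfaces the intuition in the paragraph above: public interaction data alone is enough to reveal which individuals sit at the centre of a network and therefore where shaping efforts would have the greatest reach.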
Artificially Intelligent Information Operations
Machine-learning algorithms have incredible potential to support all facets of IO. Consider, for example, the data prediction capabilities utilised by many commercial organisations. We see their effects daily in browsing recommendations, targeted advertisements, social media content, suggested online searches, and ‘helper’ software such as Google Assistant and Siri. While these are already impressive examples of machine learning in operation, with further technological improvements it may not be long before algorithms can create a plausible online replication of an entire human.[xxxvi] One contemporary example of an AI that can hold realistic human conversations is ChatGPT. Released in November 2022, ChatGPT utilises natural language processing (NLP) to engage in conversations with users, answer questions, and even create articulate poetry and lyrics.[xxxvii] The convergence of NLP-based AI algorithms with those capable of replicating voice, such as VALL-E, creates a situation never before witnessed in computational technology and IO. The ever-increasing computing power of information technology means creating digital humans may not be as fictional as it once was.
The processing capability of modern computers enables AI IO machines to support intelligence collection by sorting and filtering through vast collections of information at breakneck speed. Indeed, the pace at which AI can potentially achieve influence effects surpasses anything previously attainable through traditional mediums such as print or radio. Within the context of modern conflicts, such technology has significant potential if it is harnessed to collect information on targeted individuals and groups. Thanks to the vast array of freely available data, AI can model a culture well enough to predict how best to influence any population with a presence on the internet. For example, AI could gather open-source information from social media to predict the sensitivities of a target audience and thereby maximise the likelihood of a provocative response as a planned outcome of an influence operation.
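To make the sorting-and-filtering idea concrete, the sketch below scores a stream of hypothetical public posts against themes assumed to be sensitive for a target audience. It is a minimal keyword-counting stand-in: a real system would use trained language models and sentiment analysis, not word lists, but the principle of automatically surfacing the most provocative topics is the same.

```python
# Minimal sketch of AI-assisted audience analysis: scoring public
# posts against themes known to provoke a target audience.
# All posts and theme keywords here are hypothetical examples.
POSTS = [
    "Foreign warships spotted near the islands again",
    "Great weather for the harvest festival this weekend",
    "Fuel prices rising because of the naval blockade",
    "Local team wins the regional football final",
]

# Assumed sensitive themes for the (hypothetical) target audience.
SENSITIVE_THEMES = {
    "territory": {"islands", "warships", "border"},
    "economy": {"prices", "fuel", "blockade", "jobs"},
}

def score_post(text: str) -> dict:
    """Count keyword hits per theme; higher totals flag provocative topics."""
    words = set(text.lower().split())
    return {theme: len(words & keywords)
            for theme, keywords in SENSITIVE_THEMES.items()}

for post in POSTS:
    print(score_post(post), "|", post)
```

Run over millions of posts instead of four, and with a learned model in place of the keyword sets, this is the mechanism by which an algorithm could continuously rank which narratives are most likely to inflame a given population.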
The strategic reach of AI IO machines is akin to that of other military ‘deep strike’ capabilities—capable of reaching populations, states and non-state actors regardless of physical borders. However, unlike traditional deep-strike capabilities, the AI IO machine has greater potential to divide a target population—creating rifts based on ethnicity, culture, nationality, gender, sexuality, political alignment, associations, or employment—instead of uniting them against a common threat. The potential disruptive effect of such a capability can be illustrated by contrast with the German Blitz, the bombing campaign mounted against the British population during World War II. Intended by Germany to degrade the will of the British people (a relatively common IO objective), the bombings of London and other strategically important cities only served to unite the British people under a single, shared hardship.[xxxviii] The intended IO effect failed because Germany did not understand the British mindset; nor could it monitor the psychological impacts of the bombings in real time. The result was that Germany could not have appreciated that continuing to bomb the British population only served to galvanise their resistance. Yet in the 21st century, anyone can access social media, online and 24-hour news, memes and other internet mediums to understand and assess manipulative effects on a population. A modern blitzkrieg could just as easily occur through the digital bits of the internet as through the forests of the Ardennes in 1940, as the thought experiment highlights.
While having the capacity to operate as an offensive capability, AI has a defensive function too. Specifically, it can assist military forces to counter ‘disinformation’ campaigns. Algorithms to detect fake media (such as deepfakes) would certainly benefit a defender—particularly if they could operate in a sufficiently timely way to counter the potentially disruptive effect of fake media. AI could also be used to effectively censor fake information on social media platforms by de-emphasising its importance and minimising its publication on any given media feed. Notwithstanding AI’s potential to support defensive operations, however, militaries seeking to capitalise on the capability will face the same challenge that arises with many other, more conventional countermeasures—the attacker always holds the initiative. Specifically, once AI is used to counter disinformation, the capability can be analysed and overcome by the offensive force.[xxxix]
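The de-emphasising mechanism described above can be sketched simply: demote feed items in proportion to a detector’s confidence that they are fake. In the hedged example below the detector is a stub returning a pre-computed score; in practice it would be a trained deepfake or disinformation classifier, and the ranking formula would be far more nuanced.

```python
# Hedged sketch of defensive AI: demoting items in a media feed
# according to a detector's fake-probability score. The detector
# is a stub; a real system would call a trained classifier.
def fake_probability(item: dict) -> float:
    # Stub standing in for a real deepfake/disinformation detector.
    return item["detector_score"]

def rank_feed(items: list) -> list:
    """Order items by engagement, discounted by suspected inauthenticity."""
    return sorted(
        items,
        key=lambda it: it["engagement"] * (1.0 - fake_probability(it)),
        reverse=True,
    )

# Hypothetical feed: a viral suspected-fake clip versus a verified report.
feed = [
    {"id": "clip_1", "engagement": 900, "detector_score": 0.95},
    {"id": "report_2", "engagement": 400, "detector_score": 0.05},
]
print([item["id"] for item in rank_feed(feed)])  # report_2 outranks clip_1
```

The design choice is instructive: rather than deleting suspect content outright (overt censorship), the defender quietly starves it of reach, which is harder for an attacker to detect and protest, though, as noted above, an attacker who reverse-engineers the ranking can adapt.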
Linking It Back and Implications for Australia
Based on this paper’s analysis of current and emerging technology, and of speculative future concepts, the thought experiment scenario presented here is neither unreasonable nor infeasible. While the hypothetical IO campaign described would require sophisticated technology-enabled capabilities, able to breach many layers of a coalition force’s security, and a degree of chance, it is nevertheless possible. Future advances in computing power will increase an AI algorithm’s power to monitor the internet to calculate social trends, generate plausible fake news, and predict and influence the actions of groups, societies, or even nations.[xl] With this information, as the scenario showed, a nation could position itself to disrupt government and military decision-making through information overload, degrade unity by exploiting known cultural or political schisms, provoke responses through misinformation, defuse adverse situations through information fatigue, confuse the population with fake media and messaging, and deceive military commanders.[xli] Furthermore, it could do so simultaneously and in a time frame that cannot be matched by human decision-making. Clearly, such a capability has the potential to provide a distinct asymmetrical advantage, which is why the race to develop one is very real.[xlii] It would pay for the Australian Army to ensure it maintains pace with the competition.
Given that Australia no longer possesses the economic and military advantage it once did, in a future conflict it will probably face an adversary of greater power.[xliii] To fight in such a situation would require what prominent Australian defence analyst Dr Albert Palazzo defines as a strategy of aiming ‘not to lose’—that is, one centred on maintaining the status quo.[xliv] Such a strategy is well aligned with the 2020 Defence Strategic Update’s objectives of ‘shape, deter, respond’, which is essentially a strategy of deterrence by denial. For Australia, an AI IO capability will never provide a decisive advantage. Australia simply cannot outpace regional competitors in the information technology field (unless, of course, it realigns its educational and economic investment). Rather, AI IO is relevant to Australia because it would ensure the nation maintains pace with the world, can shape its strategic environment even though it cannot compete in absolute military power, and can remain operational in the global IO theatre.
To deter by denial, Australia must be able to project itself forward into the Defence Strategic Update’s defined primary area of operations, and it must be able to do so in the information dimension across all domains. Australia has always recognised the need for forward presence and influence, which is why it has historically placed such importance on international engagement—so that it can form coalitions and have regional access to key terrain along the approaches to Australia.[xlv] In the future, AI-enabled IO would significantly contribute to international engagement by shaping perceptions of Australia’s actions and its relationships with other nations, as well as by building understanding of the politics and decisions of others (and of Australia itself). Additionally, AI IO would play a key role in shielding any force projection by helping to conceal it from an adversary’s pervasive physical and digital surveillance. Yet Australia’s competitors think along the same lines and already conduct these types of influence operations. One only needs to turn to China’s actions since the turn of the century to witness such competition.[xlvi]
Australia will never be able to directly compete with military powers larger than itself, and in the grand scheme of the Indo-Pacific, Australia is a relatively minor military power. To remain competitive, and to influence important unaligned regional powers, it must seek an advantage outside of raw military power. Addressing this deficiency is where AI-enabled IO could help Australia thrive. Recently the Australian Army published a Robotic & Autonomous Systems (RAS) Strategy, which goes a long way towards codifying how the Australian Army will adapt to emerging AI trends and includes collaboration with Microsoft Corporation.[xlvii] The RAS Strategy highlights AI’s potential to use information to improve awareness, prediction and collaboration.[xlviii] Yet analysts have criticised these efforts, noting they will serve to ensure the Australian Army will ‘do things better, but it won’t necessarily be able to do better things’.[xlix]
Despite highlighting the importance of developing key capabilities to gain a competitive advantage (a capability offset), the RAS Strategy remains focused on individual soldier performance, decision-making, human-machine teaming, protection and efficiency. There is no mention of using AI (or RAS) to enhance IO, which is a potential deficiency. Yet encouragingly, the RAS Strategy not only aptly highlights the need to train and sustain personnel with AI skillsets inside the Army but also recognises the need to improve understanding and ‘literacy’ in relation to AI across the entire force.[l] The Australian Army has clearly identified the need to act to prepare for and capitalise on AI’s potential. It would be prudent to ensure that the implementation of the strategy includes an appreciation of how Army will use AI to enhance IO, as evaluated in this paper.
Conclusion
Future AI, accelerated by advances in computing power, promises to exponentially enhance the use of information and, with it, IO. The first nation or organisation to produce an AI IO machine similar to that discussed will possess an information power beyond anything that mankind has previously witnessed.[li] As highlighted by the thought experiment, such power harnessed to drive an AI IO system could create an information environment that overwhelms an opponent’s ability to understand the truth and make sense of the world, creating distinct tactical, operational and strategic advantages. The thought experiment scenario is not an attempt to predict a specific event; rather, it is intended to unshackle the reader from the idea that history is destined to repeat itself and to help them begin to visualise just what might (and might not) come to be.
By exploring the AI IO concept, this paper has evaluated how the Australian Army can utilise AI to ensure it remains competitive in the 21st century information environment. What remains outstanding in this discussion is ethics—something that most certainly constrains the application of AI to IO, and is heavily dependent on culture. Investigating the ethics of AI is a substantial discussion in its own right and beyond the scope of this paper, but it must occur should the ideas presented here be taken further. Yet two things are certain. The first is that technological advancements and societal trends mean the information environment will only grow in density and complexity, offering both opportunities and challenges on a scale that has never existed in previous conflicts. The second is that AI will almost certainly have a critical role to play, particularly as businesses and organisations seek to capitalise on AI’s potential to enhance their bottom line. The nation, or non-state actor, that first succeeds in harnessing this capability will possess an overwhelming advantage, the likes of which have not been seen since the United States alone possessed the almighty power of a nuclear arsenal. Whether it wants to or not, the Australian Army is entrenched in a race towards obtaining an AI-enabled advantage for IO. It would be wise to ensure it does not fall behind.
Army Commentary
Callum Muntz’s piece provides a provoking insight into the value of artificial intelligence (AI). The scope and depth of the impact of this suite of technologies is starting to be felt both within and beyond Defence—the influence of chatbots on social media, for example, has been a topic of very public scrutiny. The blurring of the provision of accurate information and timely understanding through the use of AI agents creates significant uncertainty and may deliver the decision paralysis alluded to in the piece. However, this may only be a transient opportunity as means of detecting and countering AI products such as deepfakes are developed. The importance of certified AI model training data, testing standards and cyber-worthiness are all known challenges in ensuring that systems are robust and trustworthy. AI is but part of a suite of tools, including data analytics and cloud computing techniques, that will enhance a future force.
In the manoeuvrist approach we seek to undermine the will of the adversary at all levels. IO tools as proposed could be a significant enhancement. In some areas the technology is more advanced than the author portrays, so the need to seize the opportunity, as well as to develop the counter, is key. Of course, access to the target audience is critical to achieve the penetration that IO may seek to have. The current Ukraine war aptly shows how limiting access to social media, for example, can reduce the impact of IO; Russia has managed to retain broad domestic support for its war through this means. So even with the AI tools proposed there is no certainty of success.
Army is currently exploring the role of AI as an enabler in a number of areas, including autonomy and decision advantage; as the author articulates, the RAS Strategy is a starting point. The Army Quantum Technology Roadmap also serves to highlight the potential for quantum technologies, including computing. While the article takes a narrow view of the application of these technologies, the implementation and supporting services required to enable such outcomes should not be underestimated.
This is a thought-provoking article that shines a light on an application of AI that could provide opportunity.
RC Smith OBE, CSC
Colonel
About the Author
Major Callum Muntz is a proud father, husband, and infantry officer currently serving with the United States Army in their I Corps Headquarters.
Endnotes
[i] Kai-Fu Lee, 2018, AI Superpowers: China, Silicon Valley, and the New World Order (Boston: Houghton Mifflin), Chapter 1; Jonas Schuett, 2019, ‘A Legal Definition of AI’, SSRN Electronic Journal, preprint arXiv:1909.01095; Alain Cardon, 2018, Beyond Artificial Intelligence: From Human Consciousness to Artificial Consciousness (John Wiley & Sons), ix–xi.
[ii] Nick Bostrom, 2014, Superintelligence: Paths, Dangers, Strategies (Oxford: OUP), Chapter 1.
[iii] British Government, 2022, Defence Artificial Intelligence Strategy; Jieruo Li, 2022, ‘Artificial Intelligence Technology and China’s Defense System’, Journal of Indo-Pacific Affairs 5, no. 2; U.S. Government, 2022, Responsible Artificial Intelligence Strategy and Implementation Pathway; Government of France, 2019, Artificial Intelligence in Support of Defence.
[iv] ‘Meaning of complex in English’, Cambridge Dictionary (online), accessed 3 September 2020, at: https://dictionary.cambridge.org/dictionary/english/complex; C Paul, CP Clarke, BL Triezenberg, D Manheim and B Wilson, 2018, Improving C2 and Situational Awareness for Operations in and through the Information Environment (Santa Monica, CA: Rand National Defense Institute).
[v] ‘Cyber Defence’, NATO website, at: www.nato.int/cps/en/natohq/topics_78170.htm
[vi] L Ablon, A Binnendijk, QE Hodgson, B Lilly, S Romanosky, D Senty and JA Thompson, 2019, Operationalizing Cyberspace as a Military Domain: Lessons for NATO (Santa Monica, CA: Rand Corporation), at: www.rand.org/pubs/perspectives/PE329.html
[vii] For an understanding of the status of AI and how it is likely to converge with many other emerging technologies, all of which are only accelerating the development of one another, see the excellent book by Peter H Diamandis, 2020, The Future Is Faster than You Think: How Converging Technologies Are Transforming Business, Industries, and Our Lives (Simon & Schuster).
[viii] Alex Joske, 2020, The Party Speaks for You: Foreign Interference and the Chinese Communist Party’s United Front System (Australian Strategic Policy Institute), 9.
[ix] ‘Speaker Pelosi’s Taiwan Visit: Implications for the Indo-Pacific’, CSIS website, 15 August 2022, at: www.csis.org/analysis/speaker-pelosis-taiwan-visit-implications-indo-pa…
[x] C Thi Nguyen, 2020, ‘Echo Chambers and Epistemic Bubbles’, Episteme 17, no. 2; Ludovic Terren and Rosa Borge-Bravo, 2021, ‘Echo Chambers on Social Media: A Systematic Review of the Literature’, Review of Communication Research 9; Seth Flaxman, Sharad Goel and Justin M Rao, 2016, ‘Filter Bubbles, Echo Chambers, and Online News Consumption’, Public Opinion Quarterly 80, no. S1.
[xi] ‘“Zero-day” is a broad term that describes recently discovered security vulnerabilities that hackers can use to attack systems. The term “zero-day” refers to the fact that the vendor or developer has only just learned of the flaw—which means they have “zero days” to fix it. A zero-day attack takes place when hackers exploit the flaw before developers have a chance to address it.’ (‘What Is a Zero-Day Attack?—Definition and Explanation’, Kaspersky, at: https://usa.kaspersky.com/resource-center/definitions/zero-day-exploit)
[xii] Richmond Lattimore and Leonard Baskin (trans.), 1962, The Iliad of Homer (CUP Archive).
[xiii] Zi Sun, The Art of War: Sun Zi’s Military Methods, trans. Victor H Mair, 2007 (New York: Columbia University Press).
[xiv] U.S. Army Chief of Staff, Field Manual 3-0, 1-22 and 2-2, 2022.
[xv] C Paul, C Clarke, M Schwille, J Hlávka, M Brown, S Davenport, I Porche and J Harding, 2018, Lessons from Others for Future U.S. Army Operations in and through the Information Environment (Santa Monica, CA: RAND Corporation), at: www.rand.org/content/dam/rand/pubs/research_reports/RR1900/RR1925z1/RAND_RR1925z1.pdf
[xvi] Colin S Gray, 2007, Another Bloody Century: Future Warfare (Weidenfeld & Nicolson).
[xvii] Ben Hatch, 2019, ‘The Future of Strategic Information and Cyber-Enabled Information Operations’, Journal of Strategic Security 12, no. 4.
[xviii] Joshua E Kastenberg, 2007, ‘Tactical Level PSYOP and MILDEC Information Operations: How to Smartly and Lawfully Prime the Battlefield’, Army Lawyer 61.
[xix] J DiDonato, A Barker and J Schwartz, 2019, ‘Psychological Operations in Support of Fires’, Fires Bulletin, March–April, at: https://sill-www.army.mil/firesbulletin/archives/2019/mar-apr/articles/…
[xx] Ibid.
[xxi] W Altman, 2018, ‘Memes that Kill: The Future of Information Warfare’, CBInsights Research Briefs 3.
[xxii] Supasorn Suwajanakorn, Steven M Seitz and Ira Kemelmacher-Shlizerman, 2017, ‘Synthesizing Obama: Learning Lip Sync from Audio’, ACM Transactions on Graphics 36, no. 4.
[xxiii] Thomas Paterson and Lauren Hanley, 2020, ‘Political Warfare in the Digital Age: Cyber Subversion, Information Operations and “Deep Fakes”’, Australian Journal of International Affairs 74, no. 4.
[xxiv] Microsoft, n.d., ‘VALL-E’, at: https://valle-demo.github.io/
[xxv] J Thies, M Zollhöfer, M Stamminger, C Theobalt and M Nießner, 2016, ‘Face2Face: Real-Time Face Capture and Reenactment of RGB Videos’ (paper presented at the IEEE Conference on Computer Vision and Pattern Recognition).
[xxvi] Sanghyeong Yu and Kwang-Hee Han, 2018, ‘Silent Chatbot Agent Amplifies Continued-Influence Effect on Misinformation’ (paper presented at the CHI Conference on Human Factors in Computing Systems).
[xxvii] R Chesney and DK Citron, 2018, Disinformation on Steroids: The Threat of Deep Fakes (Council on Foreign Relations).
[xxviii] D Rekhi, ‘AI Chatbots like Bard, ChatGPT Stoke Fears of Misinformation Nightmare’, The Economic Times, 24 February 2023.
[xxix] U.S. Army Chief of Staff, Field Manual 3-0, 1-20 and 1-21.
[xxx] Stephen L Deterding and Blake Safko, 2018, Bits and Bullets: Cyber Warfare in Military Operations (Monterey, CA: Naval Postgraduate School).
[xxxi] Ibid.
[xxxii] Symantec, 2019, Internet Security Threat Report 24, at: https://docs.broadcom.com/doc/istr-24-2019-en
[xxxiii] R Roy and B Dempsey, ‘Commanding in Multi-Domain Formations: Vision 2050 Warfighter Cyber-Security, Command and Control Architecture’, Small Wars Journal, 24 July 2017, at: https://smallwarsjournal.com/jrnl/art/commanding-in-multi-domain-format…
[xxxiv] A Dowse and S-D Bachmann, ‘Explainer: What Is “Hybrid Warfare” and What Is Meant by the “Grey Zone”?’, The Conversation, 17 June 2019, at: https://theconversation.com/explainer-what-is-hybrid-warfare-and-what-i…
[xxxv] Christopher Telley, 2018, The Influence Machine: Automated Information Operations as a Strategic Defeat Mechanism, The Land Warfare Papers No. 121 (Arlington, VA: The Institute of Land Warfare).
[xxxvi] M Murphy and J Templin, ‘REPLIKA: This App Is Trying to Replicate You’, Quartz, 2017, at: https://classic.qz.com/machines-with-brains/1018126/lukas-replika-chatb…
[xxxvii] ‘What Is Conversational AI?’, IBM website, n.d., at: https://www.ibm.com/topics/conversational-ai
[xxxviii] Sebastian Junger, 2016, Tribe: On Homecoming and Belonging (Twelve).
[xxxix] Katarina Kertysova, 2018, ‘Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation is Produced, Disseminated, and Can Be Countered’, Security and Human Rights 29, no. 1–4: 55–81.
[xl] Such advances in computing power include the possibility of achieving a quantum computer. For more information, see ‘What Is Quantum Computing?’, IBM website, n.d., at: www.ibm.com/quantum-computing/learn/what-is-quantum-computing/
[xli] Alex S Wilner, 2018, ‘Cybersecurity and Its Discontents: Artificial Intelligence, the Internet of Things, and Digital Misinformation’, International Journal 73, no. 2.
[xlii] S Witt, ‘The World-Changing Race to Develop the Quantum Computer’, The New Yorker, 12 December 2022, at: www.newyorker.com/magazine/2022/12/19/the-world-changing-race-to-develo…
[xliii] Albert Palazzo, 2021, Planning to Not Lose: The Australian Army’s New Philosophy of War, Australian Army Occasional Paper No. 3 (Australian Army Research Centre), at: https://researchcentre.army.gov.au/library/occasional-papers/planning-n…
[xliv] Ibid.
[xlv] Lim Meng Whye, 1975, Economic and Military Relations between Australia and Singapore: With Special Reference to the Period 1941–59 (The Australian National University), 129; Ina Mary Cumpston, 2001, Australia’s Defence Policy 1901–2000 (IM Cumpston), 104–107; Minister for Defence, 1953, Strategic Basis of Australian Defence Policy, 11; Defence Committee, 1956, Strategic Basis of Australian Defence Policy, 2; Defence Committee, 1959, Strategic Basis of Australian Defence Policy, 9–10; Defence Committee, 1963, Strategic Basis of Australian Defence Policy, 7–8; Defence Committee, 1964, Strategic Basis of Australian Defence Policy, 16.
[xlvi] For more on this broad topic, see Michael Pillsbury, 2016, The Hundred-Year Marathon: China’s Secret Strategy to Replace America as the Global Superpower (St. Martin’s Griffin).
[xlvii] Australian Army, 2022, Robotic & Autonomous Systems (RAS) Strategy.
[xlviii] Ibid.
[xlix] P Layton, ‘The ADF Could Be Doing Much More with Artificial Intelligence’, The Strategist, 26 July 2022, at: www.aspistrategist.org.au/the-adf-could-be-doing-much-more-with-artificial-intelligence/
[l] Australian Army, Robotic & Autonomous Systems (RAS) Strategy.
[li] National Academies of Sciences, Engineering, and Medicine, 2019, Quantum Computing: Progress and Prospects (Washington, DC: The National Academies Press), 14–16.