
Disinformation and misinformation campaigns - the Australian context


Globally, concern over authoritarian regimes using information technology to control their populations is on the rise. Less well publicised is how the distorting power of new media is proving comparably potent beyond such countries. Disinformation and misinformation campaigns are powerful tools for creating behavioural change in democracies, offering unprecedented opportunities for influence to those who understand their power. While these campaigns occur in the virtual world, they can have a profound impact in the real world, manifesting as confusion and chaos, a loss of focus, and increased polarisation, frustration and apathy.

Australia has not yet been exposed to these phenomena on the scale we have seen overseas. But it is happening. During the 2019/2020 Australian bushfires, various social media platforms were leveraged to generate and support a false narrative, highlighting the risk disinformation and misinformation pose to the Australian public. For example, trolls and bots were active on Twitter, attempting to undermine or deny the link between the bushfires and climate change by hijacking the hashtags #climateemergency and #climatecriminals to retweet the hashtag #arsonemergency. At the same time, conservative news outlets and radio announcers, who seemingly thrive on their contributions to misinformation and disinformation, were amplifying the links between arsonists and the bushfires, misrepresenting the data, and downplaying any link with climate change. As people began to buy into the false narratives, they spread and reinforced them, effectively working with, and even taking over from, the bots and trolls.

COVID-19 provides a particularly alarming example of the realities and dangers of misinformation and disinformation, or what the World Health Organisation has labelled an ‘infodemic’. Perpetrators and motives varied, ranging from state-led efforts to downplay the extent of the virus through to conspiracy theories and false claims about medical cures. All of these narratives worked against leaders and authorities making timely and effective decisions that could have avoided many deaths. At the time of writing, it remains unclear what the agendas of many influence actors may have been, but it does seem apparent that the confusion generated may reduce the eventual accountability of decision-makers in some countries.

There is nothing new in the use of information campaigns aimed at overriding the master narrative, creating chaos, panic or confusion, or even changing the narrative altogether. Various nations have run such campaigns as a cornerstone of their psychological warfare activities for decades, including in the context of a global health crisis, as highlighted by Operation Infektion / Operation Denver. However, the ubiquity of technology has changed the potency of these campaigns, and we are now seeing Mexican drug cartels, autocratic leaders, ‘social influencers’ and even political campaigns leveraging misinformation and disinformation to control and manipulate populations.

The first line of defence is to accept that this is happening and to understand the seriousness of these campaigns’ potential effect on society. These campaigns are not just an irritant; they can be threats to national security and should be treated as such. They can generate fear, anxiety and emotional contagion, undermining cognitive resilience and pushing those with divergent perspectives beyond mutual disdain into anger and hatred, resulting in instability. For example, racism is a great social divider, and the emotions that trigger racist attitudes, particularly fear, are very easy to exploit. Here in Australia, the national responses to deal with these campaigns remain embryonic. With minimal effort by those practiced in this type of campaigning, #invasionday or similar protests could be exploited to the point where they cause political and economic disruption.

The next step is more difficult. Technology provides some solutions for identifying a campaign: on Twitter, for example, it is possible to detect whether a tweet was generated by a bot using tools such as tweetbotornot, Botometer, and Bot Sentinel. But these tools do not provide all the answers, and some of their metrics can flag legitimate accounts as bots. Additionally, simply refuting false information will not prevent people from believing it: even assuming the refutation is seen by the target audience, it is human nature to believe information that confirms pre-existing beliefs, whether it is factual or not. Social media platforms have a role to play in fact-checking the information on their platforms; the pressure on them to respond is increasing, and they are gradually doing more, at least as it relates to COVID-19 and ‘anti-vaxxers’. But it is not enough.
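To make the detection problem, and its false-positive risk, concrete, the toy sketch below scores an account on a handful of signals of the kind such tools weigh: posting rate, account age, follower ratios and profile completeness. The features, weights and thresholds here are illustrative assumptions only, not the actual metrics of tweetbotornot, Botometer or Bot Sentinel:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AccountSnapshot:
    """Simplified account metadata; real detectors use hundreds of features."""
    created_at: datetime
    tweet_count: int
    followers: int
    following: int
    has_default_profile_image: bool

def bot_likelihood(acct: AccountSnapshot, now: Optional[datetime] = None) -> float:
    """Toy 0-1 score built from a few signals bot detectors commonly weigh.

    The weights and thresholds are illustrative assumptions, not
    calibrated values from any real tool.
    """
    now = now or datetime.now(timezone.utc)
    age_days = max((now - acct.created_at).days, 1)
    tweets_per_day = acct.tweet_count / age_days

    score = 0.0
    if tweets_per_day > 50:  # implausibly high sustained posting rate
        score += 0.4
    if acct.has_default_profile_image:  # no effort spent on a persona
        score += 0.2
    if acct.followers < 10 and acct.following > 1000:  # follow-spam pattern
        score += 0.3
    if age_days < 30 and acct.tweet_count > 1000:  # brand new but hyperactive
        score += 0.1
    return min(score, 1.0)

# Example: a week-old account with no avatar posting ~300 times a day.
suspect = AccountSnapshot(
    created_at=datetime(2020, 1, 1, tzinfo=timezone.utc),
    tweet_count=2100,
    followers=3,
    following=1500,
    has_default_profile_image=True,
)
now = datetime(2020, 1, 8, tzinfo=timezone.utc)
print(f"bot likelihood: {bot_likelihood(suspect, now):.2f}")  # -> 1.00
```

Note how a prolific but entirely legitimate account, a breaking-news journalist, say, could trip the posting-rate rule: this is precisely the false-positive problem with such metrics noted above.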

The European High Level Expert Group on Fake News and Online Disinformation (HLEG) report on how to counter online disinformation and misinformation campaigns provides a series of recommendations for short- and long-term responses. These responses are built on five pillars, including increasing transparency, promoting media and information literacy, and empowering users and (trusted) journalists.

The current responses to disinformation and misinformation do not adequately consider the future threat. The COVID-19 ‘infodemic’ highlights how misinformation and disinformation evolved with the spread of the disease, from a ‘strategic’ false narrative stoking general health fears to more localised ‘tactical’ narratives aimed at inflaming specific social and political divides within and amongst European countries. Notwithstanding the known limitations of AI algorithms and some famous failures in online interactive machine learning, such as Microsoft’s Tay chatbot, a recent article by the Atlantic Council describes a series of futures made possible by the integration of AI systems into machine-driven communications tools used for propaganda purposes (MADCOMS).

It is easy enough to imagine how these scenarios of MADCOM-supported disinformation might play out. In the context of the COVID-19 infodemic, for example, those localised ‘tactical’ divides in Europe would have been identified and exploited by MADCOMS long before the disease reached each country, heightening emotions, fuelling panic and increasing the difficulty of containment efforts. As it currently stands, misinformation and disinformation are driven by humans with a specific purpose, be that mischief, profit or oppression. Understanding the purpose of a campaign helps in undermining it, weakening the effect of the false narratives on the larger populace. The MADCOMS future is one in which machines manipulate humans with no identifiable purpose, rendering coherent and effective responses all the more challenging.

It is not the campaigns themselves that are the biggest problem. They are symptomatic of societal problems, and they would not work if there were no underlying sentiment to exploit. The more unstable a society, the larger the social divides and the more disenfranchised particular groups feel, the easier it is to exploit the population. In this fact there is potentially good news: by identifying which elements of a campaign are particularly successful, we can identify which societal problems pose the greater threats to our domestic and regional stability. This information can inform both the military and the whole-of-government approach: in times of instability, to prepare preventative measures; or in times of crisis, if we have missed the window of opportunity for prevention.

The views expressed in this article and subsequent comments are those of the author(s) and do not necessarily reflect the official policy or position of the Australian Army, the Department of Defence or the Australian Government.
