Artificial Intelligence in Army, Part 1: The Trouble with AI

Artificial Intelligence (AI) offers exciting opportunities for Army; however, there are also significant limitations. One such limitation is inherent bias. AI systems are only as good as their input data, and outcomes can be corrupted by ‘bad data’ that contains implicit racial, gender or ideological biases. There are more than 180 human biases, each of which can affect how decisions are made by the coder, and subsequently by the machine. Like bias in human society, bias in AI must be actively interrupted. This is not a simple process: AI is attractive precisely because humans believe it is objective and rational, and organisations generally do not have the experience to interrogate the algorithms used to build an AI system.

Even if the Army thought to interrogate the algorithms, many AI developers do not give access to the ‘black box’, that is, the underlying technology, on the grounds that it is proprietary. This leaves the military with limited opportunities to understand the dataset being used. Furthermore, machine learning is inherently opaque: it is difficult to identify which features of the input data the machine used to make a particular decision, and therefore where the bias was introduced.

A simple example of implicit bias in AI is the first wave of virtual assistants, which reinforced sexist gender norms: the assistants that perform basic tasks (Siri, Alexa) have female voices, while the more sophisticated problem-solving bots (Watson, Einstein) have male ones. Studies have also demonstrated sexism in the word associations learned by AI. For example, ‘man’ is associated with boss, president, leader and director, whereas ‘woman’ is associated with helper, assistant, employee and aide.
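To make the word-association finding concrete, the short Python sketch below computes cosine similarity over toy, hand-invented ‘embedding’ vectors. The numbers stand in for what a real model such as word2vec would learn from biased text; they are not drawn from any actual system.

```python
# Illustrative sketch only: toy 3-dimensional "embeddings" standing in for
# vectors a real model (e.g. word2vec or GloVe) would learn from biased text.
# The numbers below are invented for demonstration, not taken from any model.
import numpy as np

embeddings = {
    "man":       np.array([0.9, 0.1, 0.3]),
    "woman":     np.array([0.1, 0.9, 0.3]),
    "boss":      np.array([0.8, 0.2, 0.4]),
    "assistant": np.array([0.2, 0.8, 0.4]),
}

def cosine_similarity(a, b):
    """Standard cosine similarity: 1.0 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for role in ("boss", "assistant"):
    for person in ("man", "woman"):
        sim = cosine_similarity(embeddings[person], embeddings[role])
        print(f"{person:>5} ~ {role:<9} similarity = {sim:.2f}")

# With these toy vectors, "man" scores higher against "boss" and "woman"
# against "assistant" -- the same pattern the word-association studies report.
```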

Facial recognition algorithms highlight additional concerns. These algorithms learn how to calculate the similarity of faces from pre-existing training sets of faces. When those inputs are not varied, the machine struggles to identify faces that do not match the data set. This seems obvious, yet racial bias in facial recognition software remains a problem that frequently leads to inaccuracies in identification. A 2018 study published by researchers at MIT found error rates of up to 35 per cent when detecting the gender of dark-skinned women, compared with less than one per cent for lighter-skinned men. A May 2018 report on facial recognition technology in use by UK police forces found that, on average, 95 per cent of ‘matches’ across the UK were false. The worst results occurred when the AI attempted to identify individuals who were not Caucasian, where the failure rate was 98 per cent.
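The mechanism behind these error gaps can be illustrated with a deliberately simple, assumed example. The Python sketch below trains a toy classifier (standing in for a face-matching or gender-classification model) on synthetic data in which one group supplies 95 per cent of the training examples, then measures the error rate for each group separately. None of the data is real; the point is only how under-representation translates into unequal accuracy.

```python
# Minimal illustration (synthetic data, not real face images): a toy classifier
# trained on a dataset dominated by one group performs worse on the
# under-represented group, mirroring the error gaps reported in the MIT study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_axis):
    """Two classes separated along a single axis; which axis differs per group."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2))
    X[:, informative_axis] += 3.0 * y          # class signal lives on one axis
    return X, y

# Majority group: 950 training examples; minority group: only 50.
X_maj, y_maj = make_group(950, informative_axis=0)
X_min, y_min = make_group(50,  informative_axis=1)

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh, equally sized samples from each group.
for name, axis in [("majority", 0), ("minority", 1)]:
    X_test, y_test = make_group(1000, informative_axis=axis)
    error = 1.0 - model.score(X_test, y_test)
    print(f"{name} group error rate: {error:.1%}")

# The classifier fits the majority group's signal far more strongly than the
# minority group's, so the minority error rate comes out substantially higher.
```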

When MIT graduate Joy Buolamwini famously discovered that facial recognition technology was unable to recognise her face, she decided to look at the problem in more detail. In her Coded Gaze project, Buolamwini discusses how many computer vision projects share the same code and inputs, so the bias of the original coder propagates well beyond a single program. And it is not only the biases of white male coders that find their way into algorithms: algorithms developed in China, Japan and South Korea recognised East Asian faces more accurately than Caucasian faces.

What does this mean for Army? The global video surveillance market is growing, with security forces a prime target for developers. Consider the use of AI and facial recognition technology to improve physical security at a Forward Operating Base in Afghanistan, to support a routine patrol in Iraq, or to screen potential candidates for a local security force being trained by the Australian military. Or consider Automated Weapon Systems (AWS) with the capability to use facial recognition technology to identify legitimate targets. Even with humans maintained ‘in the loop’, the failure rates caused by biased algorithms are cause for significant concern.

Another consideration is that machine learning systems reduce or remove the impact of outliers in their training data. In the context of military operations, machine learning can be used to teach an AWS which groups are legitimate targets. Beyond the issues of inaccurate facial recognition, if the outliers represent a minority group that is not a legitimate target for military action, and the system has removed those outliers, that group will not be captured in the ‘do not target’ data set and could therefore be mistaken for legitimate targets, as the sketch below illustrates.
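The following is a hedged illustration of that risk, using invented numbers rather than any real targeting data: the Python sketch applies a routine three-standard-deviation outlier filter to a synthetic ‘do not target’ training set and shows the small minority cluster being silently discarded before a model ever sees it.

```python
# Hedged sketch of the outlier problem described above, using invented numbers:
# a small minority cluster inside the "do not target" training data is flagged
# as outliers by a routine filtering step and silently dropped before training.
import numpy as np

rng = np.random.default_rng(1)

# Most of the "do not target" examples cluster around one feature profile...
majority = rng.normal(loc=0.0, scale=1.0, size=(980, 2))
# ...while a small minority group has a distinctly different profile.
minority = rng.normal(loc=6.0, scale=0.5, size=(20, 2))
do_not_target = np.vstack([majority, minority])

# A common pre-processing step: discard anything more than 3 standard
# deviations from the mean of the pooled data.
mean, std = do_not_target.mean(axis=0), do_not_target.std(axis=0)
z_scores = np.abs((do_not_target - mean) / std)
kept = do_not_target[(z_scores < 3).all(axis=1)]

minority_kept = (kept[:, 0] > 3).sum()   # crude count of surviving minority rows
print(f"training rows kept: {len(kept)} of {len(do_not_target)}")
print(f"minority rows surviving the outlier filter: {minority_kept} of 20")

# If the filter removes the minority cluster, any model trained on `kept` never
# sees that group in its "do not target" class at all.
```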

Any reliance on this existing technology would be dangerous, both to the Army and to the civilians we are bound to protect. Moreover, knowing that bias and other issues exist is only the first step. As noted above, interrupting bias in AI is not a simple process: even tech giants Google and Microsoft have admitted they have yet to find solutions to their own bias problems.


The views expressed in this article and subsequent comments are those of the author(s) and do not necessarily reflect the official policy or position of the Australian Army, the Department of Defence or the Australian Government.
