Speech - Advanced Computing and Warfare
Speech given at the Synergia Conclave, Bangalore, India, 18 November 2023
[Editorial note: This speech has been edited and condensed for clarity.]
I am going to discuss the ways advanced computing might change warfare, and the ways it might not. My aim is to recommend where advanced computing applications are likely to get the biggest return on investment. In order to avoid the trap of prediction, I intend to look at the past—particularly the recent past—to give us some sense of how best to focus the application of advanced computing to warfare so that we are best prepared for the future, whatever it might be. After all, the future is irreducibly unknowable.
My address is principally a caution for those who think that the potential of advanced computing in warfare is greatest in the area of command and control, specifically in improved decision-making. My thesis is that war is a practical phenomenon: it is about doing more than it is about thinking. Applications of advanced computing that enable better doing are therefore likely to have a more substantive influence on future warfare than applications that enable better command and control.
To frame my address a little, I will use the term 'advanced computing' to refer to a range of things, including AI, machine learning, quantum computing and the like. In this respect, I should note that I come to this topic as an army officer with expertise in warfare rather than in computing. I will also avoid the subject of cyber warfare, because there are essentially only three war-related things you can do in cyberspace: espionage, sabotage and subterfuge, together with their counters. While these things have been features of war from the very beginning, they have never been particularly decisive. That tendency is unlikely to change just because they now take place predominantly in cyberspace. So I am steering clear of cyber.
Let us begin our exploration by casting our minds back to the 1990s. It was the decade of Fukuyama's 'end of history'. At the beginning of the decade, in the process of liberating Kuwait, America's rejuvenated post-Vietnam army and air force had virtually annihilated Saddam Hussein's army. To some, the American success was evidence that developments in precision munitions and information technologies had changed the very nature of war. It seemed that the side able to take full advantage of the new information systems could succeed in battle and win wars with reduced risk to its troops. That experience inspired the theory of network-centric warfare.
At the time, net-centric theorists like the Vice Chairman of the American Joint Chiefs of Staff, Admiral William Owens, made bold claims, including that technology could enable US military forces in the future to lift the 'fog of war' itself. He claimed: 'Dominant battlefield awareness—the ability to see and understand everything on the battlefield—might be possible.' He also observed that:
When you look at areas such as information warfare, intelligence, surveillance, reconnaissance and command and control, you see a system of systems coming together that will allow us to dominate battlefield awareness for years to come … it suggests we will dissipate the ‘fog of war’.
To some extent, Owens's prediction proved true. It is probably fair to say that the US did dominate in those fields for at least the next decade. Indeed, the reconnaissance strike systems that Owens anticipated have certainly come to play a significant role in contemporary warfare. In Ukraine, for example, the battlefield is so saturated with sensors, particularly drones, that almost nothing can take place unobserved. Those sensors are linked to digital command and control systems and responsive long-range fires. Concentrations of troops are easily discovered and, once discovered, just as easily and quickly destroyed. Advanced sensors, coupled with autonomous explosive boats and long-range anti-ship missiles, have also made Russia's Black Sea fleet largely redundant. Furthermore, local air superiority in the war in Ukraine seems to be as much a function of ground-based air defence systems as of aircraft.
Despite these advances, or perhaps because of them, the Ukrainian Commander-in-Chief recently observed that the Eastern Ukraine front is in a state of stalemate. Over the summer of 2023, Ukrainian land forces advanced just 17 kilometres at most, and only in a few select places.
Rather than providing decisive advantage to one side or the other, the new reconnaissance strike systems have so strengthened the 'defence' over the 'offence' that the net result is deeply disappointing. Its features are trenches, futile attacks, stalemate, indecisiveness, attrition, and long wars with no clear path to victory. The outcome of these information-age advances appears to be a reversion to early 20th-century land warfare. Indeed, the scenes coming out of Eastern Ukraine resemble the Western Front in 1916 and Stalingrad in 1943 rather than some imagined science-fiction future.
The Ukraine war is not the only recent example of this phenomenon. Take, for example, the war to defeat ISIS in northern Iraq. Even with complete air superiority, and a remarkable overmatch in space and in the electromagnetic spectrum, it took months, and thousands of Iraqi infantry, to force ISIS back into Syria. The more recent Israeli incursion into Gaza is perhaps a further illustration.
Like the failed promise of air power theory, the predictions of Owens and the other network-centric theorists founder on a few similarly flawed assumptions about war. For the air power theorists, the flawed assumptions were that the bomber would always get through and that, having got through, its bombing would decisively demoralise the civilian population. The network-centric theorists, for their part, assumed that sensors would deliver perfect knowledge and awareness of the battlefield and that such awareness would confer a marked advantage in the quality and speed of decision-making. These assumptions are flawed in both cases because they derive from a mechanistic sense of the battlefield and of war.
Clausewitz observed that war is akin to a duel. It is a physical and dynamic thing, a function of doing rather than thinking, and something considerably more complex than simple action, reaction and counteraction. When Admiral Owens made his extraordinary claims about the effect of information technologies on warfare, he assumed that only his side would have the advantage offered by the network. Like his air power counterparts, he imagined the adversary as largely passive: a collection of targets and nodes to do things to. He did not anticipate a future in which both sides could achieve a similar level of technological advancement and battlefield transparency. It is perhaps for these reasons that, despite extraordinary technical advances in recent decades (particularly informational developments), warfare today (at least warfare on land) still looks much like the wars of the early and mid-20th century.
So what can we learn from the air power and network-centric case studies? What do these examples say about our grand ideas for how advanced computing might improve the way we wage war? To answer those questions, let us remind ourselves of some of the grand expectations people have of advanced computing in war. The first quote comes from a 2021 global trends analysis report about the future battlefield, issued by the office of the American Director of National Intelligence. It captures a fairly frequent refrain about future warfare:
The future of warfare is likely to focus less on firepower and more on the power of information and the way it connects a military’s forces through the concepts of command, control, communications, computers, intelligence, surveillance, and reconnaissance.
The second quote is from Dr Michael Richardson, an Australian researcher in political violence and emerging technologies at the University of New South Wales:
There is a move towards killing that is intensely predictive … We will have the technological capacity in many instances to take human decision making out of the [killing] process and to push [computer] predictions to the forefront.
The People's Liberation Army is making similar claims about the effect of advanced computing on future warfare. Chinese strategists claim that artificial intelligence's value for decision-making will turn future warfare into a competition over which state can field the fastest computers. They claim that wartime commanders will be armed with supercomputers that will come to surpass the decision-making abilities of the humans directing them, a concept the PLA calls 'algorithmic warfare'. PLA strategists predict that frontline combatants will gradually be phased out and replaced with intelligent swarms of drones that will give operational-level commanders complete control over the battlefield. They expect that, over time, the tactical level of warfare will be composed almost entirely of robots, and war will become largely a game. These recent claims are very similar to those of Owens and the network-centric theorists in the 1990s. All of them, Western and Chinese alike, are heavily premised on the assumption that knowledge and decision-making are of critical importance in warfare. I think that assumption holds only up to a point. Let me explain.
Reconnaissance strike systems, in particular, are having a profound effect on warfare. In one sense, they have given age-old battlefield features like fortifications, stalemate and attrition a new lease on life. For example, artillery and landmines are proving as important as ever in eastern Ukraine. In another sense, they have forced us to ask some hard questions about warfare, including about whole domains of it. For example, the range and accuracy of modern armed drones and missiles, coupled with ubiquitous sensors, are posing some profound questions about the conduct of war at sea. It is probably already possible to exercise sea denial over vast swathes of ocean from the land with missiles and drones alone. And we still do not fully understand why air power has not been as decisive as expected in Ukraine. Importantly, all these emerging changes to warfare have only a tangential relationship to command and control and decision-making.
So let us look specifically at the expectations of advanced computing for command and control and decision-making in warfare. The main expectation is that advanced computing will improve both the quality and the speed of decision-making. It promises to sift through enormous amounts of battlefield data in the blink of an eye and, on the basis of that data, to produce plans and solutions for things like attack and defence. It might even be able to predict what an enemy will do, enabling pre-emption. It promises to take the tracks of the myriad targets on the battlefield, including ships, planes, radars, headquarters, air defence systems and the like, and, on the basis of one's own target priorities, the battlefield situation, the available munitions and the readiness of the many shooters, to apply the best munition, from the most appropriate weapon system, to the most appropriate target. From these promises comes the grander promise: that these results will offer great battlefield advantage, or even a war-winning advantage, to one side or the other. There are two key elements to these claims, and both are doubtful. The first is that advanced computing will deliver the expected change in the quality and speed of decision-making and targeting. The second is that, should that change occur, it will have a marked influence on warfare. Let us consider the latter element first.
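To make that targeting promise concrete: what is being described is, at its core, a weapon-target assignment problem. The following is a minimal sketch in Python, with entirely hypothetical names and a placeholder probability-of-kill model, of the greedy pairing logic such a system might automate. It is an illustration of the concept only, not a description of any real targeting system.

```python
# Illustrative toy only, not any real targeting system: every name,
# value and model here is hypothetical.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    priority: float  # higher means more important to strike

@dataclass
class Shooter:
    name: str
    rounds_available: int

    def p_kill(self, target: Target) -> float:
        # Placeholder effectiveness model; a real one would depend on
        # range, munition type, target state and much else.
        return 0.5

def assign(targets: list[Target], shooters: list[Shooter]) -> list[tuple[str, str]]:
    """Greedily pair the highest-priority target with the shooter offering
    the best expected effect. Note: this trusts the track data completely."""
    pairs = []
    for tgt in sorted(targets, key=lambda t: t.priority, reverse=True):
        ready = [s for s in shooters if s.rounds_available > 0]
        if not ready:
            break
        best = max(ready, key=lambda s: s.p_kill(tgt))
        best.rounds_available -= 1
        pairs.append((best.name, tgt.name))
    return pairs
```

Notice how much even this toy quietly assumes: a complete and truthful track picture, an accurate effectiveness model, and shooters that report their own state perfectly. Those assumptions are precisely where the argument that follows takes aim.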
The idea that improved decision-making enabled by advanced computing will have a marked influence on warfare rests on a common fallacy. That fallacy, as previously mentioned, is to overestimate the importance of cognitive command and control (C2) functions like thinking, knowing and deciding in warfare. Knowing your enemy's intentions and dispositions certainly has its advantages, but knowing what your enemy is doing, and is about to do, is not in itself decisive. What one does with that knowledge is what is decisive, because war is about doing. The late British historian John Keegan substantiated this thesis in his seminal book Intelligence in War, using the Battle of Crete in World War II as one of four case studies. In short, having cracked the German Enigma codes, the Commonwealth forces knew the Germans' plans entirely: the size of the forces, where and when they would land, and whether by parachute or amphibious landing. Despite that knowledge, and despite numerical superiority over the German attackers, General Freyberg's defenders lost Crete decisively. The Germans won through action and intent rather than knowledge.
There are other similar cases in war. George McClellan famously obtained the campaign plans of Robert E Lee before the Battle of Antietam, yet failed to incorporate that intelligence into his own plan and nearly lost. It was the Union soldiers who staved off disaster at points in the battle like 'Bloody Lane' who ultimately won the day, and they did so largely despite their commanding general, not because of him. Similarly, can we imagine how the decision-making support of advanced computing could have delivered a different result in Malaya and Singapore in 1942? It seems unlikely. Would it have made the hard-fought battles for the Pacific island atolls any easier? Probably not, because the factors that decided those victories and losses were myriad, and most had little to do with battlefield decision-making.
The point is that knowing and deciding are less than half the battle, and probably significantly less. This feature of war explains, in part, why a decisive technological advantage over ISIS in Iraq, for example, did not translate into a quick and easy victory. The same goes for maritime warfare. Investing in ever-quicker decision loops might be pointless if sea denial can be effected from the land. Other innovations, like using many small, cheap, fast and rapidly reproducible boats, might constitute the better part of the response to this circumstance. So we should perhaps be circumspect about the effects of advanced computing on the function of command and control. In fact, careless investment in advanced computing, in pursuit of some sort of decisive battlefield decision-making advantage, carries some serious risks.
One risk is that risk-averse commanders start to use advanced computing as a crutch for making decisions: in effect, subcontracting the responsibility for decisions to computer algorithms. It is human nature, particularly in a certain kind of commander. Advanced computing offers an alluring but potentially illusory kind of due diligence. We can imagine a commander reluctant to make battlefield choices inconsistent with computer-generated options and recommendations, because we can imagine inquiries asking future commanders why they went against the advice of a decision-support algorithm. There is already evidence of this predisposition of commanders to seek certainty or assurance for their decisions from process and third parties.
In a paper authored by Dr Leanne Rees, Colonel Grant Chambers and me a few years ago, we found that the Australian Army was putting too much emphasis on the quantifiable, procedural and informational aspects of headquarters and staff functionality. We found that ever-greater investment in these aspects produced diminishing returns: headquarters were not improving despite ever-greater effort and investment in C2 systems. New C2 systems seemed to have no consequence for the effectiveness of headquarters, even as headquarters grew bigger and bigger. By contrast, we found that effort put into broadening the battlefield experience and expertise of talented individuals destined for future command was likely to produce markedly better headquarters performance.
The commander's role in a headquarters is profound. We observed that many commanders were becoming approvers and amenders of staff solutions and staff plans, keeping themselves somewhat at arm's length from the circumstances of the battlefield and the detail of planning, and acting more like staff-course instructors than commanders. We also found that a focus on staff, information and procedure tended to deprive the commander of firsthand knowledge and experience of the battlefield, and to deprive the headquarters staff of the advantage of the commander's intuition, experience and talent. Another related finding was that talented and experienced commanders tended to rely on only a few pieces of information to make good battlefield decisions, and that it was impossible to know in advance which few pieces of information those would prove to be. This finding corroborated British defence analyst Jim Storr's assertion that decision-making in battle is not information intensive but information sensitive. In other words, there is little evidence to support the idea that more and more data will lead to better battlefield decisions. Indeed, we found that decisions only needed to be near enough to be good enough.
The reality is that a perfect decision confers very little advantage over a near-enough decision in battle. The proposition is supported by General Erich von Manstein's admonition that 'the larger the headquarters, the worse the command'. Regardless of the quality and speed of a recommendation from some advanced algorithm, commanders still need the courage or the nerve to act on it. The commander must still accept the associated risks and the associated loss of life and materiel. Knowing what one should do is one thing; having the courage or the nerve to go through with it is entirely another. Again, war is about doing, and it is a social activity.
The second serious risk is cultural. It relates to the correlated risk of developing highly centralised doctrines for command and control in order to take advantage of advanced computers in decision-making. This risk relates particularly to targeting, and to the potential of advanced computing to connect everything up perfectly. The idea is that if you have an all-knowing computer brain that can see all the targets on the battlefield and knows the state of all the potential shooters, and if that brain can very quickly and efficiently assign the best shooter to the best target, then logically you do not need to delegate decisions for striking those targets to subordinates. Such assumptions ignore the strong possibilities that the network will not always function perfectly, that the all-knowing computer brain will not always know everything and, indeed, that it can be spoofed or have its data corrupted. And if we assume the system works perfectly, it brings into question the whole Western theory of delegated command and mission command. Why allow commanders at any level below the supreme commander any autonomy over battlefield decisions if the advanced centralised computer brain can make all the decisions better and faster?
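That fragility is easy to make concrete. Here is a minimal, self-contained sketch, with entirely hypothetical targets and values, of how a single corrupted track quietly reorders a fully centralised strike plan, with no fault visible to the operator:

```python
# Hypothetical illustration of the data-corruption risk, not any real system.

def plan_strikes(tracks: dict[str, float]) -> list[str]:
    # Order strikes purely by reported priority, as a centralised
    # 'computer brain' with total trust in its inputs would.
    return sorted(tracks, key=tracks.get, reverse=True)

honest = {'air-defence radar': 9.0, 'supply truck': 2.0}
spoofed = {'air-defence radar': 9.0, 'supply truck': 99.0}  # one corrupted value

print(plan_strikes(honest))   # ['air-defence radar', 'supply truck']
print(plan_strikes(spoofed))  # ['supply truck', 'air-defence radar']
```

Nothing in the second output betrays that the plan is wrong; only someone with independent knowledge of the battlefield could tell.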
Central Western tenets of command and leadership, including initiative, responsibility, degrees of autonomy, a bias for action, risk acceptance and the obligation to disobey orders made irrelevant by circumstances, would all become redundant (even counterproductive) under a system underwritten by an advanced computerised brain. Needless to say, some care is therefore warranted in pursuing a technical solution to the problem of command and control in warfare. An investment in a command and control solution that anticipates a decisive reduction in the fog of war is fraught with risk.
So there you have it. Hopefully I have made a plausible case for some circumspection about the relative merits of advanced computing for battlefield command and control. The bottom line is that war is a practical and dynamic phenomenon. It is about ‘doing’ more than it is about thinking and deciding. To that end, I think advanced computing applications that enable better doing are likely to have a more substantive influence on future warfare than technologies that enable better decision-making.
Successful armed forces tend to be those that best overcome the new problems created by technological advances, and the solutions normally lie in procedural, doctrinal and social adaptations rather than in further technological advances. After all, while Napoleon and Frederick the Great are both universally considered to have been geniuses, their genius was much more than coup d'oeil and astute decision-making on the battlefield. To some extent, they both preordained their battlefield successes through institutional factors: rigorous training, the selection of marshals, the creation of staff schools, new social arrangements, logistics, new doctrines, the levée en masse, and the like. These are the kinds of things that underwrite a high quality of action in war. As such, applying advanced computing to the problem of ensuring high-quality action is likely to yield a greater return on investment than applying it to quicker and better battlefield decision-making.
ABOUT THE AUTHOR
Major General Chris Smith is the Deputy Chief of the Army. Prior to taking up this position, he served as the Deputy Commanding General—Strategy and Plans for the US Army Pacific (USARPAC) at Fort Shafter, Hawaii. His other senior appointments include the Director General Land Operations (G3) of the Australian Army, the Director General Strategic Planning—Army (now Director General Future Land Warfare), Director Plans—Army, Chief of Staff to the Chief of the Defence Force, and Chief of the Defence Force's Liaison Officer to the Chairman of the Joint Chiefs of Staff.
His operational experience includes Commanding Officer, 2nd Battalion Battle Group, Afghanistan, 2011; Operations Officer, 2nd Battalion Battle Group, Iraq, 2006; United Nations Military Observer, Lebanon and Golan Heights, 2002; and Platoon Commander, United Nations Assistance Mission, Rwanda, 1995. Major General Smith also served as the Commander Landing Force for the newly formed Australian Amphibious Ready Element in 2013.