
The Profession of Arms in an AI-Enabled World

DOI: 10.61451/220105

Abstract

This article explores how artificial intelligence (AI) is poised to reshape multiple aspects of military practice—tempo, teaming, and decision cycles—and how that may affect the profession of arms. Drawing on contemporary Australian and international military doctrine, recent conflicts, and academic literature, it reassesses four professional dimensions in an AI context: expertise; ethics and accountability; identity and culture; and self-regulation. The article argues that while AI may disrupt aspects of the character of war, it does not alter its fundamental nature: war remains a human endeavour. Military professionals must now integrate technical fluency with traditional judgement, maintain ethical accountability amid algorithmic opacity, preserve trust and cohesion within hybrid human–machine teams, and lead in testing and establishing doctrinal and moral boundaries for AI use. It contends that adapting to AI is not merely a technical challenge but a test for the profession itself. The enduring values of Defence (service, courage, respect, integrity, excellence) remain essential and must evolve to guide the profession through this period of rapid technological change. Ultimately, the article asserts that the profession of arms must shape, not be shaped by, the rise of AI.

Introduction

The advent of Artificial Intelligence in warfare may usher in a new age, as for the first time non-human entities may make decisions about the application of political violence.[1]

Lieutenant General Simon Stuart, Chief of Army

Modern warfare stands on the cusp of a disruptive transformation driven by AI. This prospect raises profound questions about the future of the profession of arms—the vocation of military service and expertise in the ethical application of force on behalf of society. The profession of arms has long been grounded in human judgement, expertise and moral responsibility in war. Yet AI-enabled systems, from autonomous drones to decision-support algorithms, are rapidly changing how militaries fight and make decisions. What, then, does the profession of arms look like in an AI-enabled world?

This article examines that question by reviewing the traditional foundations of the profession of arms and analysing how AI-driven technological disruption is altering warfare and military decision-making. This analysis revisits four core dimensions of the military profession in light of AI: expertise (the changing nature of military knowledge and decision processes, such as AI-enabled planning); ethics and accountability (new challenges from autonomous systems and ‘black-box’ algorithms, and preserving human responsibility); identity and culture (maintaining trust, cohesion and purpose in human–machine teams); and self-regulation (how the profession can set ethical and doctrinal boundaries for AI use). Finally, the discussion appraises the implications of these changes for the Australian Army’s future professional identity and its relationship with Australian society.

This review aims to critically assess how AI might reshape (but not replace) the profession of arms. In doing so, this paper draws on contemporary sources, including emerging national strategies, allied initiatives like AUKUS and Project Convergence, and scholarly perspectives, alongside insights from Australian Army research on the profession itself.

Definitions and Scope

In this article, artificial intelligence (AI) refers to computational systems that sense, process and generate outputs that support or take decisions, covering both machine-learning (data-trained) and rules-based approaches. Autonomy is the ability of a system to select and execute actions towards commander’s intent without continuous human direction, within constraints and subject to human override. Decision support refers to AI that assists a human commander; decision delegation refers to a system executing actions once authorised.

Background: the Profession of Arms

The term ‘profession of arms’ encapsulates the military vocation as a specialised field akin to medicine or law, distinguished by unique expertise, ethical responsibilities and an ethos of service. Within the Australian Defence Force (ADF), formal recognition of this concept has only recently crystallised. Australian Military Power (2021, revised 2024) defines the profession of arms as comprising personnel skilled in ‘the ethical application and exercise of lethal force to defend the rights and interests of the nation’.[2] This definition underscores the profession’s core elements: ethical conduct, expertise in controlled violence, and national duty.

Historical thinkers have long shaped our understanding of military professionalism. Lieutenant General Sir John Hackett characterised soldiering as possessing a distinct body of technical knowledge, institutional cohesion, specialised education and a unique societal role.[3] Similarly, Samuel Huntington identified the military profession through its expertise in warfare, a profound societal responsibility, and distinct institutional culture.[4] Both thinkers highlight the profession’s uniqueness in expertly wielding force with the objective of resolving societal problems.

Within the Australian Army, Chief of Army Lieutenant General Simon Stuart has described the profession as resting on three pillars: jurisdiction, expertise and self-regulation.[5] Jurisdiction signifies the Army’s unique service—applying land combat power to advance national interests. Expertise represents the specialised knowledge embodied in the ‘art and science of war’, encompassing tactics, doctrine and advanced military skills. Self-regulation underscores the military’s ability to govern its ethical and disciplinary standards internally, even amid the chaos of warfare. Collectively, these pillars align closely with Hackett’s and Huntington’s principles, affirming the Army’s professional autonomy, distinct knowledge base and ethical obligations.

The core ethos binding these elements together is the notion of ‘unlimited liability’, whereby soldiers willingly risk their own lives and accept responsibility for taking the lives of others. This commitment creates a profound societal contract: in return for their sacrifice, society grants the military professional autonomy, esteem and trust, contingent on maintaining high standards of integrity, ethical conduct and competence.[6] Military professionals undergo extensive education and rigorous training, developing expertise progressively throughout their careers.

Past technologies—aviation, nuclear weapons, cyber, and precision munitions—reshaped tactics and force design but kept humans clearly in charge of decision-making. AI is different in kind: it invites delegation of decision steps to non-human systems, often via opaque models that act at speed and at scale, which creates new pressures on judgement, accountability and training. Furthermore, AI’s rapid advancement poses pressing questions against the established backdrop: does AI alter the required professional expertise? Does it introduce new ethical challenges and accountability gaps? How must military training evolve to ensure continued mastery of warfare? Will the Army’s identity and cohesion change as humans increasingly partner with autonomous systems? Crucially, how can the profession effectively regulate AI usage to maintain societal trust?

The questions raised by this contemporary technological context provide the opportunity for the Army to assess the full effects AI may have on its professional identity. This reassessment is urgent as militaries worldwide integrate AI systems into operations, requiring the profession to evolve without compromising its core ethical and societal commitments. The Australian Army stands at a critical juncture, challenged to maintain its professional integrity and re-examine its values against the emerging realities of AI-enabled warfare.

Technological Disruption: AI’s Impact on Warfare and Decision-Making

AI’s technological disruption of warfare has been described in some analyses as a ‘third revolution’, comparable in impact to gunpowder and nuclear arms, with claims that AI cuts through the fog of war and enables new ways of fighting.[7] However, while AI is reshaping tempo and tactics, the evidence from recent conflicts indicates an evolution of warfare rather than a revolution; a true ‘revolution’ requires parallel organisational, doctrinal and cultural transformation, not only new kit.[8] This article therefore treats AI as a catalyst for professional adaptation rather than an accomplished revolution. Regardless, AI is fundamentally altering both operational and strategic paradigms, prompting militaries worldwide to adapt swiftly.

Autonomous systems such as drones, robotic vehicles and intelligent munitions are already changing battlefields. In practice, most systems fielded today are remotely piloted or operate with bounded (narrow) autonomy within prescribed envelopes (e.g. waypoint navigation, target tracking, terminal guidance) and with human authorisation for effects against persons. Recent conflicts, including the war in Ukraine, illustrate how AI-guided drones and autonomous systems are being employed extensively, providing decisive tactical advantages.[9] These technologies allow for unprecedented speed, precision and operational tempo, fundamentally altering force structures and battlefield tactics. Smaller AI-enabled drones can neutralise larger, expensive crewed platforms, dramatically shifting cost-effectiveness ratios and strategic calculus.[10]

Beyond physical platforms, AI’s most profound impact is arguably in decision-making. Warfare today generates enormous volumes of data, from surveillance feeds to intelligence reports, exceeding human cognitive capacities.[11] Through advanced machine-learning algorithms, AI rapidly processes and interprets this data, significantly compressing the observe–orient–decide–act (OODA) loop, which is essential to military superiority.[12] The Australian Army’s decision-making and planning process (DMPP) exemplifies an area ripe for AI integration, with potential to accelerate decision-making cycles and provide commanders enhanced situational awareness and tactical agility.

International experiments such as the US Army’s Project Convergence highlight AI’s operational potential.[13] Using systems like FIRESTORM, these trials demonstrate AI’s capacity to compress targeting and coordination processes from minutes to mere seconds, dramatically improving lethality and survivability.[14] Likewise, the multinational AUKUS trials have shown how AI-enabled sensor-to-shooter networks can drastically reduce the time from target identification to engagement.[15] However, these trials emphasise that speed helps only when matched by quality assurance: fast loops can amplify ‘hallucination’ (confidently presented, plausible but unfounded outputs)[16] or ‘slop’ (large volumes of low-quality content that floods systems).[17] Critically, they also underscore the need to retain human oversight, introducing roles like the ‘AI officer’ to ensure accountability.

Despite technological enthusiasm, AI’s strategic implications remain contentious. Critics like AI ethicist Toby Walsh argue that current defence strategies inadequately address AI’s transformative potential, leaving militaries vulnerable to sudden disruptive shifts.[18] Conversely, senior ADF leadership emphasises that while technology changes the conduct of war, the fundamental nature of warfare remains human-centric.[19] Human factors such as leadership, judgement and moral courage remain irreplaceable elements in combat.

While AI may reshape war, the weight of evidence still points to evolution rather than revolution.[20] Autonomous systems are shifting force design and cost curves, as small, cheap, smart platforms become available. The most profound change is in decision-making: machine-learning tools ingest intelligence, surveillance and reconnaissance (ISR) at scale and compress the OODA loop; the Army’s DMPP is primed for this. Strategically, views diverge—some warn that policy and doctrine lag technology; senior ADF voices emphasise that war’s human nature endures. Ultimately, technological disruption via AI provides the Australian Army a significant opportunity to redefine and reinforce its professional identity, ensuring its continued relevance and effectiveness in the emerging AI-enabled era of warfare.

Expertise: AI and the Changing Nature of Military Knowledge

Traditionally, military expertise has encompassed mastery of tactics, strategy, leadership and operational planning, built upon deep, experiential understanding of warfare. However, AI introduces a dramatic shift, demanding that military professionals acquire not only conventional military knowledge but also proficiency in AI technologies, data science and algorithmic reasoning.

The Australian Army now acknowledges technical literacy as an essential requirement of modern soldiering, emphasising the importance of understanding AI-driven systems in warfare. Some industry voices push further, as in Shyam Sankar’s provocative assertion that ‘warfighters need to know how to code’.[21] The claim exaggerates an evolving reality: code does not kill; people do. What AI offers is efficiency and decision advantage, not a substitute for responsibility for lethality or judgement. Even so, software and algorithms are becoming central to military effectiveness, requiring soldiers to understand and employ these tools effectively in operational contexts.

Yet integrating technical expertise with traditional military judgement poses significant challenges. Some, like Johnson, highlight that while AI is powerful in handling vast datasets and generating rapid tactical recommendations, it lacks the intuitive grasp of battlefield complexity inherent in human commanders.[22] Military professionals risk ‘deskilling’ if overly dependent on AI-generated solutions, potentially losing their ability to independently analyse, judge and creatively respond to unpredictable battlefield situations. This phenomenon, termed the ‘ironies of automation’, highlights how the use of automation, while enhancing efficiency, can inadvertently erode crucial human skills.[23]

A key approach to mitigating these risks is ‘centaur warfighting’, a model blending human strategic insight with AI’s analytical capabilities.[24] Rather than replacing human decision-makers, AI systems serve as cognitive augmentations, enhancing commanders’ situational awareness and decision-making speed. Experts advocate preserving human judgement as the ultimate arbiter, cautioning against fully delegating strategic decisions to machines.[25] Effective human–machine teaming thus requires military professionals to develop nuanced judgement in leveraging AI, critically evaluating algorithmic outputs and maintaining overall control.

Practical examples underscore this evolving expertise. Initiatives such as the multinational AUKUS trials introduced the ‘AI officer’ role, dedicated to supervising algorithmic decisions within operational contexts. Similarly, research conducted by institutions like the National Defence University of Finland highlights the necessity for specialised officers proficient in AI, supported by broader AI literacy among all military leaders.[26] These developments illustrate how military expertise is shifting towards dual competence: retaining foundational strategic and ethical understanding while incorporating deep familiarity with advanced technological tools.

The Australian Army’s DMPP epitomises this dual expertise requirement. AI applications in DMPP can swiftly generate and evaluate multiple operational scenarios, offering commanders enhanced strategic insight.[27] However, maintaining human oversight and cultivating critical judgement remains paramount in AI-enabled military environments.[28] Military professionals must be adept at interrogating AI-generated recommendations, understanding algorithmic assumptions and identifying potential errors or biases. Education and training must actively develop scepticism and critical-thinking skills, enabling commanders to avoid passively accepting AI outputs. Deliberate exposure to AI limitations, through training scenarios in which algorithms make plausible yet flawed recommendations, is essential for developing this critical judgement.

In summary, the above evidence suggests AI is expanding the skill set that constitutes military expertise, rather than replacing its core. While new technical competencies will matter, they should be developed without eroding the profession’s strategic, tactical and ethical foundations. The Australian Army has the opportunity to cultivate leaders who employ AI judiciously while maintaining independent judgement and critical thinking, informed by military history, ethics and human psychology.[29] Framed this way, AI resembles a transformative tool that enhances (rather than substitutes for or replaces) human strategic ingenuity.

Ethics and Accountability: Autonomy, the ‘Explainability Paradox’ and Preserving Human Responsibility

Like earlier disruptive technologies—from aircraft to nuclear weapons—AI pressures the profession’s ethical foundations, but its distinct challenge is the shift of agency from humans to autonomous systems. As machines increasingly participate in battlefield decisions, military professionals must confront new questions. Can a machine ever be trusted to make life-and-death choices? Who is accountable when an algorithm causes harm? How do we preserve human responsibility in a world of autonomous systems and black-box decision-making?

At the heart of military ethics is accountability: humans, not machines, must remain responsible for the use of force.[30] Yet as AI systems become more autonomous, this foundational principle is under pressure. Close-in defensive systems (e.g. CIWS/Phalanx, SeaRAM) can search, detect, track and intercept incoming materiel threats at machine speed within pre-authorised envelopes with minimal human input.[31] It therefore remains important to separate defensive, object-targeting autonomy from offensive, person-targeting autonomy.

The ethical dilemma is stark: can responsibility for offensive, lethal decisions be shared with, or even ceded to, software? Delegating those decisions to software is categorically different: only humans possess moral agency and can bear command responsibility;[32] opaque models cannot evidence compliance with distinction, proportionality and precautions; machines lack reason-responsive judgement (mercy, restraint);[33] and removing the human from the act of killing erodes public trust and lowers the threshold for force.[34] Furthermore, modern AI systems, particularly those based on machine learning, often operate as black boxes, a challenge dubbed the ‘explainability paradox’.[35] Hence, meaningful human control at the point of lethal effect is non-negotiable.

Democratic militaries have attempted to draw a line through the principle of ‘meaningful human control’. But enforcing this is complex.[36] If an AI system recommends a strike, and a human commander rubber-stamps it without understanding the rationale, then that is not truly meaningful control. Under these conditions, humans risk becoming mere supervisors of decisions they cannot fully audit. This is the ‘moral crumple zone’ problem, where human operators absorb blame for outcomes they did not truly control.[37] A better model is risk-aware delegation with independent checks. For example, Australian artillery has chosen to cede some decision-making to computer-generated firing data in order to speed up its OODA loop; however, it requires an independent manual verification against maps and safety traces before execution—retaining command accountability while harvesting efficiency.[38]

It is clear that ethical and legal responsibility cannot be outsourced. The commander remains accountable, regardless of how automated the system. The US Department of Defense’s AI principles reinforce this: personnel remain responsible for all AI use.[39] Commanders must remain ‘in the loop’ for lethal decisions even as many systems move human involvement to ‘on the loop’ and, if not governed, risk humans drifting ‘out of the loop’.[40] The ADF should adopt the same uncompromising stance, beyond its current technical analysis.[41] The profession of arms demands ownership of decision-making, and accountability for consequences, especially when life is at stake.

Designing AI systems with traceability and governability is essential. Traceability means commanders and operators must understand how decisions are made, what data was used and what assumptions were applied. Governability means systems must be interruptible: if an AI behaves unexpectedly, soldiers must have the ability to override it or shut it down. NATO’s AI principles and the US Department of Defense’s AI ethical principles both require the ability to deactivate systems that show unintended behaviour,[42] and the International Committee of the Red Cross (ICRC) urges timely intervention and deactivation as a legal and ethical safeguard.[43] Reports on the Israel Defense Forces’ ‘Lavender’ show how thin human verification over AI-generated target lists can fuel controversy about accountability and civilian harm,[44] underlining why auditable logs, real overrides and enforceable decision rights are non-negotiable at the point of lethal effect. Doing this demands complex, critical thinking[45]—professional scepticism, creative red-teaming, and wargaming that forces failures to happen in training, not in contact.

Beyond technical safeguards, the military must actively resist the dehumanisation of war. As AI increases stand-off capabilities and removes soldiers from physical danger, the temptation to use force may grow. Psychological distance is known to reduce empathetic concern and increase abstraction, which can weaken restraint when risk to one’s own force is low.[46] When there is no risk to one’s own troops, the political and emotional barriers to initiating conflict can erode.[47] This moral distancing, ‘war by algorithm’, threatens the very restraint that underpins ethical military conduct.[48][49] Here, the role of military professionals as ethical stewards becomes even more critical. Soldiers must internalise that AI is a tool, not a moral agent. It cannot be blamed and it cannot absolve. Ethics education must evolve to cover AI scenarios explicitly, training leaders to challenge AI recommendations when they conflict with values or law.

Training environments also present risks. If trainees use AI-enabled planning tools without oversight, they may fail to engage critically with ethical trade-offs. Simulations that optimise for victory without considering collateral damage can desensitise leaders to moral consequences. Training must be deliberately structured to include ethical considerations, requiring students to exercise moral judgement, not just tactical and technical competence.

The Australian Army must also articulate ethical boundaries for AI, and codify its use in doctrine, policy and practice. This includes specifying where human decision-making is non-negotiable and defining acceptable levels of autonomy. The profession must lead, not wait for, policymakers. This is part of self-regulation: imposing standards on oneself to retain legitimacy and societal trust.

In summary, the rise of AI makes ethics and accountability more, not less, central to the profession of arms. Military professionals must ensure that human judgement remains at the core of every lethal decision, that AI systems are used transparently and responsibly, and that the ethos of service, courage and integrity is not diluted by automation. If AI is to serve the military, it must do so within the moral framework that defines the profession—not outside or above it.

Identity and Culture: Trust, Cohesion and Purpose in Human–Machine Teams

AI is changing how armies fight and how soldiers work in teams—but the profession remains human at its core. The military profession has always been defined not only by its expertise and ethics but also by a strong identity and culture. Trust, cohesion and purpose bind military units together. Human–machine teaming now tests all three.

First—trust. Human teams rely on mutual confidence built through shared hardship and demonstrated reliability. That trust must now extend to AI tools and autonomous systems if hybrid teams are to function reliably in combat.[50] Just as with new team members, this trust must be earned, and AI systems must prove themselves in training and operations. Studies have noted that ‘soldiers can develop a readiness to trust the AI systems soon to be integrated with warfighting teams’ by leveraging concepts of trust from human team cohesion.[51] However, any failure, whether an autonomous system deviating from its directives or a decision-support system producing an imprecise course of action (COA), can erode that trust quickly. Building trust in AI requires rigorous, transparent testing and realistic training. AI is fallible, and its recommendations must be interrogated, not followed without question. Military culture must reinforce the point that responsibility always rests with the human. The danger is overconfidence in technology: failing to consider other available options, or simply assuming the algorithm knows best. That path invites unintended consequences. A countermeasure is deliberate human skilling: leaders generating and comparing independent COAs, and practising ‘AI-off’ drills, red-team challenges and cognitive-forcing checks, so that questioning, overriding and choosing the human-owned option becomes trained behaviour—not an act of courage.

Second—cohesion. Traditionally, soldiers bond through shared experiences: field exercises, deployments, danger. As AI takes over some tasks, especially in logistics, ISR and targeting, the structure of human teams may change. Soldiers may operate more independently or alongside autonomous systems rather than with fellow humans. This risks weakening traditional bonds of trust and mutual dependence. Military leaders must find new ways to build cohesion in hybrid human–machine teams.[52] This may include integrating systems into team routines and roles (e.g. giving names to autonomous systems or running human–machine competitions) so that they are treated as accountable team assets, not faceless black boxes. This limited ‘operational anthropomorphism’—the sort we already use with military working dogs—may aid care, protection and recovery. This might seem trivial, but organisational culture is built on symbols and shared stories.[53]

Humanising the machines or decision-making algorithms helps reinforce that they are part of the mission; but at the same time, a balance should be found against over-anthropomorphising machines. Like aircraft individualised through tail numbers, these systems remain equipment, not moral subjects—valued for mission contribution, and replaceable without the rituals reserved for people. It is therefore important to treat anthropomorphism as a tool, not a truth: allow it in a bounded way to cue efficiency, coordination and teamwork, while measuring and calibrating it through training (trust checks, question–verify–override drills) so that attachment never overrides judgement or accountability.

Third—purpose. The Australian Army’s ethos is built on the Defence values of service, courage, respect, integrity and excellence. These values are forged in adversity and tested in combat. But what happens when AI systems carry out the most dangerous missions? Does the role of the human soldier diminish? Some fear that the profession will lose its soul—that remote warfare and AI-driven operations will hollow out the sense of service and sacrifice.[54] But this future is not preordained. The military can and must adapt its concept of soldier identity. Courage in the AI era may mean taking moral risks, challenging an AI recommendation under pressure, or leading human–machine teams into uncertain operational environments. Honour remains in making hard decisions, bearing responsibility and protecting civilians. The profession endures when its values adapt to new contexts. Military culture has evolved before. The rise of aviation, cyber and remote warfare each brought cultural challenges.[55] Uncrewed aerial vehicle operators and cyber soldiers do not fit the mud-and-blood archetype, yet they have been integrated into the profession. The same can happen with AI-era roles like ‘AI officers’—so long as the core values are preserved.

Leadership styles may also need to adapt to the deeper integration of people with technology. An emerging concept is ‘AI command’ or ‘centaur leadership’, in which a commander directs both human subordinates and autonomous systems as part of the unit.[56] This requires clear communication of intent that both humans and AI can act on, possibly through more standardised, data-driven expressions of commander’s intent so that AI decision-support tools align with the commander’s goals.

The integration of AI also has implications for the civil–military relationship and societal trust, which are part of the profession’s broader identity. The Australian Army has a proud identity tied to values like mateship, courage, and loyalty, and it is keenly aware of serving the democratic society’s interests. As AI-driven warfare emerges, the Army will need to ensure the public understands that the Army’s values and human accountability remain paramount.[57] If the public perceives that the military is becoming a faceless force of robots that might act without human control, trust will be undermined. Thus, the Army’s culture of transparency and adherence to law must be clearly communicated. The Chief of Army’s initiative to focus on ‘the Army in society’ and clarify the Army’s purpose suggests recognition that societal licence for the Army’s actions might hinge on showing that even as the force modernises with AI, it remains firmly under ethical, human control.[58]

In summary, AI may reshape certain facets of military identity—teaming, tempo and decision cycles—but need not erode it. With deliberate leadership, adaptive culture and a firm hold on core values, the profession of arms can remain a human vocation. What remains to set professionals apart is not their hardware but their judgement, responsibility and commitment to each other and the society they serve.

Self-Regulation: Setting Ethical AI Boundaries

Professions are characterised not only by their expertise and service but by their ability to self-regulate—to set standards of conduct, enforce accountability and adapt norms as their practice evolves. For the Australian military profession, self-regulation operates within the framework of civilian control, but there remains a domain of professional autonomy in matters of discipline, doctrine and ethical guidelines.[59] In the face of AI-enabled warfare, how the Australian profession of arms establishes and enforces boundaries for AI use will be a critical test of its professionalism. It must ensure that this powerful technology is harnessed in ways consistent with military virtue, legal obligations and strategic prudence.

Firstly, the military can incorporate AI usage guidelines directly into doctrine and rules of engagement (RoE). Just as RoE and targeting directives are shaped by context and mission, law and policy will need to codify limits on autonomous systems, for example by prohibiting the deployment of certain AI capabilities unless they meet reliability thresholds under realistic conditions (aligning with reliable and tested AI principles). The ADF could produce doctrine specifically on human–machine teaming, outlining best practices and required oversight. Self-regulation will involve the Army (and the wider Defence enterprise) writing the ‘rulebook’ for AI in military operations before crises force ad hoc decisions. This internal rule-making reflects professional autonomy: military experts devising the tactics, techniques and procedures that both exploit AI and keep it within acceptable bounds.

The ADF needs AI-specific ethical principles. Modelling a first effort on the US Department of Defense’s clear commitments to traceability, reliability and governability is a reasonable starting point.[60] But before committing to new principles the Army should first test the current system: deliberate assessment and wargaming is needed to confirm where existing values, law of armed conflict obligations, the DMPP and acquisition gates already accommodate AI, and to identify genuine gaps. Where gaps persist, it may be necessary to integrate an AI employment code within doctrine so it is trained, planned and audited like any other obligation. For instance, a principle might state: ‘Commanders must be able to explain, challenge, and override AI decisions’. Another might be: ‘Lethal force will not be delegated to AI without human decision-making’. Service doctrine should specify contexts, decision rights and safeguards, ensuring a single professional standard expressed through different modes of control. The Australian Army profession’s moral licence to operate with AI depends on demonstrably ethical AI employment; ethics is therefore a governing requirement, not an add-on.

Other self-regulatory tools include internal advisory boards, ethical review processes for acquisitions, and pre-deployment testing standards for AI tools. These mechanisms ensure that new technologies are not fielded without adequate scrutiny.

The Australian Army has a strong foundation to build on. Chief of Army has framed professionalism through jurisdiction, expertise and self-regulation;[61] this article argues for extending the same frame to AI. The near-term task is to test current doctrine, RoE, and values against AI use, identify genuine gaps, and then embed updates where necessary. Self-regulation in the AI era is not about slowing down innovation—it is about ensuring that innovation serves, rather than subverts, the values that define the military profession. When the public see that the Army uses AI responsibly, ethically and transparently, it is hoped they will continue to grant Army the trust and autonomy it needs to operate.[62] That trust is earned, not assumed.

In summary, if the Australian Army wishes to exploit AI technology, it must also lead in setting rules for its use. That is the essence of self-regulation: using authority not for self-interest but in service of higher standards. The Australian Army must be seen as a capable and principled user of AI. Anything less puts the profession’s lasting and unwavering social contract at risk.

Implications for the Australian Army: the Future Professional Identity

For the Australian Army, the challenge posed by the rise of AI is not simply to adopt new technologies but also to test and adapt the institution’s identity, values and practices to an AI-enabled battlespace. The following six action areas represent priority lines of effort:

  1. Institutionalise AI literacy and technical fluency

Make AI literacy a foundational requirement across all ranks by embedding AI principles, data reasoning, and automation awareness into all tiers of training and professional military education (PME). Introduce AI-enabled decision-support tools in scenario-based exercises, and establish formal AI-functional specialisations (e.g. ‘AI officers’). Partner with academic institutions to offer targeted postgraduate education in machine learning, data ethics and operational AI systems.

  2. Modernise doctrine and reinforce mission command

Update doctrine to reflect AI–human teaming, clearly defining where human decision-making is non-negotiable. Build AI systems that reinforce, not undermine, mission command—prioritising decentralised, context-sensitive tools that enhance junior leader autonomy. Use AI to compress decision loops, not to centralise control.

  3. Integrate AI ethics into Army culture and policy

Codify AI-specific ethical principles—such as traceability, overridability and meaningful human control—into doctrine, RoE, and command responsibilities. Train leaders to interrogate AI outputs and model ethical use. Establish internal ethics boards to review and audit AI tool deployment and procurement.

  4. Transform PME and career development pathways

Integrate AI tools into PME curriculum and exercises. Shift the focus of PME to critical analysis of AI-enabled operations. Create and support new career pathways for AI/data professionals in service. Encourage and sponsor exchange programs with technology partners and Defence science units.

  5. Strengthen public trust through transparency and engagement

Develop a strategic communications plan that explains how AI is used lawfully, ethically and with human oversight. Be open about constraints and safeguards. Engage broader society, academia and media to co-shape Australia’s AI defence narrative.

  6. Lead professionally on policy and self-regulation

Take proactive ownership of setting internal standards for AI testing, validation, deployment and accountability. Develop doctrine before crises occur, and represent Australia’s military values in shaping international norms around responsible AI use in warfare.

Conclusion

AI’s inexorable advancement seems set to change many aspects of military operations, but it need not undermine the fundamental nature of the profession of arms. Rather, it should prompt a renaissance of professional reflection and adaptation. An AI-enabled world challenges the premise that only humans should decide who is killed, and when. The Australian profession of arms must recommit to its values and adapt to this technological evolution.

The thesis of this article has been that while AI will change how military professionals carry out their missions, it does not change why or to what end they serve; nor does it absolve them of the responsibility for those missions. Military expertise is expanding to encompass new technological domains, but the essence of that expertise endures. Ethical and accountable conduct faces new tests from autonomy and opaque algorithms. Even this profession’s own moral compass, honed over centuries, shows a record of strengths and failures, which is why its ethics must be applied, tested and audited, and why authorisation of lethal force must remain a human decision with accountable ownership. The identity and the culture of soldiers are evolving to include technical prowess and human–machine trust, but the camaraderie, courage and honour that define military units remain as vital as ever. And the profession’s capacity for self-regulation is being exercised vigorously to set boundaries on AI use consistent with laws and values, thereby maintaining public trust in an era of fast-paced change.

For the Australian Army, answering ‘What does the profession of arms look like in an AI-enabled world?’ has practical urgency. It looks like an Army that is smarter, faster and more integrated, that leverages AI for superiority yet also anchors innovation in ethics and expertise. It is, ultimately, an Army that harnesses the strengths of AI without surrendering the virtues of the soldier. The enduring value of the profession, in the face of disruptive change, is that it provides the human judgement, ethical restraint and institutional wisdom that ensure new technology is used wisely and justly in the nation’s service. In that sense, the more things change, the more the core ideals of the profession of arms must stay the same—and indeed, they remain essential for navigating the uncharted terrain of AI-enabled warfare.

Biography

Major Matthew Jones is an Army Reserve infantry officer and paediatric surgeon currently serving as SO2 Strategic Analysis at the Australian Army Research Centre. He has commanded at a company level, posted as an instructor on the Combat Officers’ Advanced Course, and deployed on multiple domestic operations. A recent graduate of the Australian Command and Staff Course, he holds a PhD in paediatric surgery, a Doctor of Medicine and a Master of Surgery, among other qualifications. His current research focuses on the Chief of Army’s priority research area ‘Defence of Australia/Homeland Defence’. He lives in Melbourne with his wife, Natalie, and their two daughters.

Endnotes

[1] Simon Stuart, ‘Strengthening the Australian Army Profession’, address, Lowy Institute, 3 April 2025.

[2] Australian Defence Force, ADF Capstone Doctrine Australian Military Power (Canberra: Commonwealth of Australia, 2024).

[3] John Hackett, The Profession of Arms (MacMillan Publishing Company, 1983).

[4] Samuel P Huntington, The Soldier and the State: The Theory and Politics of Civil-Military Relations (Harvard University Press, 1957).

[5] Simon Stuart, ‘The Challenges to the Australian Army Profession’, presentation, National Security College, Australian National University, 25 November 2024.

[6] John Hackett, ‘Lecture 1, Origins of a Profession’: The 1962 Lees Knowles Lecture by Lieutenant General Sir John Winthrop Hackett (Trinity College, Cambridge, 1962).

[7] Toby Walsh, ‘The Defence Review Fails to Address the Third Revolution in Warfare: Artificial Intelligence’, The Conversation, 28 April 2023, at: https://theconversation.com/the-defence-review-fails-to-address-the-third-revolution-in-warfare-artificial-intelligence-204619.

[8] Iain Robinson, ‘How Will Emerging Technological Revolutions Including Artificial Intelligence and Robotic Autonomous Systems Impact the Command and Control of Land Operations?’, Australian Army Journal 21, no. 2 (2025).

[9] Kateryna Bondar, ‘Ukraine’s Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare’, Center for Strategic and International Studies (website), 6 March 2025, at: https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare.

[10] Harshit Mishra and Shripati Dwivedi, ‘Economic Analysis of AI Integration in Internet of Drones (IoD)’, in Jahan Hassan, Sara Khalifa and Prasant Misra (eds), Machine Learning for Drone-Enabled IoT Networks: Opportunities, Developments, and Trends (Springer, 2025).

[11] Paul K Davis and Paul Bracken, ‘Artificial Intelligence for Wargaming and Modeling’, The Journal of Defense Modeling and Simulation 22, no. 1 (2022), at: https://doi.org/10.1177/15485129211073126.

[12] James Johnson, ‘Automating the OODA Loop in the Age of Intelligent Machines: Reaffirming the Role of Humans in Command-and-Control Decision-Making in the Digital Age’, Defence Studies 23, no. 1 (2023), at: https://doi.org/10.1080/14702436.2022.2102486.

[13] Wes Shinego, ‘Defense Officials Outline AI’s Strategic Role in National Security’, U.S. Department of Defense (website), 24 April 2025.

[14] Sydney J Freedberg Jr, ‘A Slew to a Kill: Project Convergence’, Breaking Defense, 16 September 2020, at: https://breakingdefense.com/2020/09/a-slew-to-a-kill-project-convergence.

[15] ‘AUKUS Pillar II Milestones Hint at Future Integrated Autonomous, Artificial Intelligence Operations’, U.S. Department of Defense (website), 9 August 2024.

[16] Antonín Jančařík and Ondřej Dušek, ‘The Problem of AI Hallucination and How to Solve It’, Proceedings of the 23rd European Conference on e-Learning (ECEL, 2024), pp. 122–128.

[17] Dag Øivind Madsen and Richard W Puyt, ‘The 7Vs of AI Slop: A Typology of Generative Waste’, SSRN, 2 October 2025, at: https://dx.doi.org/10.2139/ssrn.5558018.

[18] Walsh, ‘The Defence Review Fails to Address the Third Revolution in Warfare’.

[19] Stuart, ‘Strengthening the Australian Army Profession’.

[20] Robinson, ‘How Will Emerging Technological Revolutions Including Artificial Intelligence and Robotic Autonomous Systems Impact the Command and Control of Land Operations?’.

[21] Shyam Sankar, The Defense Reformation (Palantir, 2024), at: https://www.18theses.com.

[22] Johnson, ‘Automating the OODA Loop in the Age of Intelligent Machines’.

[23] Ida Lindgren, ‘Ironies of Automation and Their Implications for Public Service Automation’, Government Information Quarterly 41, no. 4 (2024), at: https://doi.org/10.1016/j.giq.2024.101974.

[24] Robert J Sparrow and Adam Henschke, ‘Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming’, Parameters 53, no. 1 (2023), at: https://doi.org/10.55540/0031-1723.3207.

[25] Marianne Bellotti, ‘Helping Humans and Computers Fight Together: Military Lessons from Civilian AI’, War on the Rocks, 15 March 2021, at: https://warontherocks.com/2021/03/helping-humans-and-computers-fight-together-military-lessons-from-civilian-ai.

[26] Kalle Saastamoinen, Antti Rissanen and Arto Mutanen, ‘Intelligent Learning in Studying and Planning Courses—New Opportunities and Challenges for Officers’, International Baltic Symposium on Science and Technology Education (2023), at: https://eric.ed.gov/?id=ED629134.

[27] Michael S Farmer, ‘Four-Dimensional Planning at the Speed of Relevance: Artificial-Intelligence-Enabled Military Decision-Making Process’, Military Review 102, no. 6 (2022), at: https://research.ebsco.com/linkprocessor/plink?id=579383a4-35e3-3ff9-a35a-e91119eeef1f.

[28] Mick Ryan, ‘Extending the Intellectual Edge with Artificial Intelligence’, Australian Journal of Defence and Strategic Studies 1, no. 1 (2019).

[29] Stuart, ‘Strengthening the Australian Army Profession’.

[30] Paddy Walker, ‘Leadership Challenges from the Deployment of Lethal Autonomous Weapon Systems’, The RUSI Journal 166, no. 1 (2021), at: https://doi.org/10.1080/03071847.2021.1915702.

[31] Tang Tang, Yue Wang, Li-juan Jia, Jin Hu and Cheng Ma, ‘Close-in Weapon System Planning Based on Multi-Living Agent Theory’, Defence Technology 18, no. 7 (2022).

[32] Robert Sparrow, ‘Killer Robots’, Journal of Applied Philosophy 24, no. 1 (2007).

[33] Bonnie Docherty, ‘Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban’, Human Rights Watch, 9 December 2016, at: https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban.

[34] Maciej Marek Zając, ‘Autonomous Weapon Systems Impact on Incidence of Armed Conflict: Rejecting the “Lower Threshold for War Argument”’, Ethics and Information Technology 27, no. 3 (2025), at: https://doi.org/10.1007/s10676-025-09847-0.

[35] Nishank Motwani, ‘The Danger of AI in War: It Doesn’t Care about Self-Preservation’, The Strategist, 30 August 2024, at: https://www.aspistrategist.org.au/the-danger-of-ai-in-war-it-doesnt-care-about-self-preservation.

[36] Anna-Katharina Ferl, ‘Imagining Meaningful Human Control: Autonomous Weapons and the (De-) Legitimisation of Future Warfare’, Global Society 38, no. 1 (2024), at: https://doi.org/10.1080/13600826.2023.2233004.

[37] Madeleine Clare Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’, Engaging Science, Technology and Society 5 (2019), at: https://doi.org/10.17351.

[38] Tate Prosser, ‘Ethical Targeting: Lessons in Contemporary Warfare’, The Cove, 13 October 2025, at: https://cove.army.gov.au/article/ethical-targeting-lessons-contemporary-warfare.

[39] ‘DOD Adopts Ethical Principles for Artificial Intelligence’, U.S. Department of Defense (website), 24 February 2020.

[40] Robinson, ‘How Will Emerging Technological Revolutions Including Artificial Intelligence and Robotic Autonomous Systems Impact the Command and Control of Land Operations?’.

[41] Kate Devitt, Michael Gan, Jason Scholz and Robert Bolia, A Method for Ethical AI in Defence (Canberra: Defence Science and Technology Group, Department of Defence, 2020).

[42] ‘Summary of the NATO Artificial Intelligence Strategy’, NATO (website), 22 October 2021, at: https://www.nato.int/cps/en/natohq/official_texts_187617.htm.

[43] ‘Autonomous Weapons: The ICRC Recommends Adopting New Rules’, International Committee of the Red Cross (website), 3 August 2021, at: https://www.icrc.org/en/document/autonomous-weapons-icrc-recommends-new-rules.

[44] Bethan McKernan and Harry Davies, ‘“The Machine Did It Coldly”: Israel Used AI to Identify 37,000 Hamas Targets’, The Guardian, 4 April 2024, at: https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes.

[45] Cristián Silva Pacheco and Carolina Iturra Herrera, ‘A Conceptual Proposal and Operational Definitions of the Cognitive Processes of Complex Thinking’, Thinking Skills and Creativity 39 (2021), at: https://doi.org/10.1016/j.tsc.2021.100794.

[46] Y Trope and N Liberman, ‘Construal-Level Theory of Psychological Distance’, Psychological Review 117, no. 2 (2010), at: https://doi.org/10.1037/a0018963.

[47] Thomas Simpson and Vincent Müller, ‘Just War and Robots’ Killings’, The Philosophical Quarterly 66, no. 263 (2016).

[48] Ruud Hortensius and Beatrice de Gelder, ‘From Empathy to Apathy: The Bystander Effect Revisited’, Current Directions in Psychological Science 27, no. 4 (2018), at: https://doi.org/10.1177/0963721417749653.

[49] Sparrow, ‘Killer Robots’.

[50] Andrew Ilachinski, Artificial Intelligence and Autonomy: Opportunities and Challenges (Arlington VA: Center for Naval Analyses, 2017), at: https://www.cna.org/reports/2017/DIS-2017-U-016388-Final.pdf.

[51] Marlon W Brown, ‘Developing Readiness to Trust Artificial Intelligence Within Warfighting Teams’, Military Review (January–February 2020), at: https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/January-February-2020/Brown-AI-ready.

[52] Alex Neads, David J Galbreath and Theo Farrell, From Tools to Teammates: Human-Machine Teaming and the Future of Command and Control in the Australian Army, Australian Army Occasional Paper No. 7 (2021).

[53] JL Zazzali, JA Alexander, SM Shortell and LR Burns, ‘Organizational Culture and Physician Satisfaction with Dimensions of Group Practice’, Health Services Research 42, no. 3(1) (2007), at: https://doi.org/10.1111/j.1475-6773.2006.00648.x.

[54] Scott Humr, ‘Protecting Our Warrior Ethos Tomorrow’, Proceedings 144, no. 2 (2018), at: https://www.usni.org/magazines/proceedings/2018/february/protecting-our-warrior-ethos-tomorrow.

[55] Captain J, ‘Examining the Australian Army Adaptation to Cyber-enabled Warfare—Organisational and Cultural Challenges’, Australian Army Journal 14, no. 2 (2018).

[56] James Johnson, The AI Commander: Centaur Teaming, Command, and Ethical Dilemmas (Oxford University Press, 2024).

[57] Boyd Clifford, ‘Humanism in the Australian Army: Adapting to the Changing Character of War’, The Cove, 27 February 2025, at: https://cove.army.gov.au/article/humanism-australian-army-adapting-changing-character-war.

[58] Stuart, ‘Strengthening the Australian Army Profession’.

[59] Thomas H Lillie, ‘Book Review: Mightier Than the Sword: Civilian Control of the Military and the Revitalization of Democracy’, Armed Forces & Society 51, no. 3, at: https://doi.org/10.1177/0095327x241278891.

[60] ‘DOD Adopts Ethical Principles for Artificial Intelligence’.

[61] Stuart, ‘The Challenges to the Australian Army Profession’.

[62] Tim McFarland, ‘Reconciling Trust and Control in the Military Use of Artificial Intelligence’, International Journal of Law and Information Technology 30, no. 4 (2022), at: https://doi.org/10.1093/ijlit/eaad008.