
A Framework for Safely Scaling Multi-Agent Teams

Journal Edition
DOI: 10.61451/210206

Introduction

Studies have demonstrated human-machine teaming (HMT) to be an effective and strategic mechanism for combining the strengths of human skills and knowledge with the benefits of robotic and autonomous systems and artificial intelligence (RAS-AI) enabled capabilities.[1] In a military context, HMT provides a way for ‘humans to operate in tomorrow’s faster, more data-heavy and more autonomous battlefield.'[2] The advent of HMT has also paved the way for other forms of collaborative military operations, including human-swarm teaming (HST), also referred to as swarming. Swarming is a structured and coordinated collaboration of multiple capabilities, of the same type, to achieve a strategic outcome.[3]

While HST and HMT have proven benefits, particularly in combat scenarios with dynamic operating environments, integrating these teaming capabilities together is not a straightforward process. Both HST and HMT come with their own sets of unique challenges which are exacerbated when implemented at scale and in operations which combine these two forms of teaming. Collectively, HST and HMT are referred to as multi-agent teaming.

The Australian Army’s hyper-teaming project is an example of Army preparing for combined scaled operations using a multi-agent teaming approach. This project brings together RAS-AI systems, across both air and land platforms, to achieve adaptive, efficient and resilient teaming and swarming outcomes. Multi-agent teaming is expected to support complex, multi-phase intelligence, surveillance and reconnaissance (ISR) related tasks and aims to increase the efficiency and scale of ISR on operations.

In these hyper-teaming operations, the robotic agents collaboratively identify targets within an area of interest and report back to their human teammates. In some cases, the robotic agents move towards objectives in combination with their human teammates. The order of movements, the means of communication, and overwatch and surveillance priorities are determined by humans. Subject to the robotic agents’ level of autonomy, there may be degrees of variation to the order of movements if the robotic agents are responding to a dynamic operating environment in real time. The extent of possible variations is bounded by the machine’s functions and capabilities.[4]

These hyper-teaming operations present an opportunity to strategically navigate dynamic and high-risk environments. The challenge, however, is to mitigate the additional complexities and risks that come when humans operate alongside RAS-AI capabilities. In teaming contexts, these capabilities are implemented as teammates rather than tools,[5] adding a social layer to these considerations.

This article will explore three key research questions:

  1. What is the impact of agentive composition of swarms on the planning and facilitation of multi-agent teaming operations?
  2. How do the different communication structures in multi-agent teams shape the interdependence between actors and impact the safety of the operation?
  3. How do the goals of independent agents impact the safety of multi-agent teams and what methods of goal definition can be implemented to ensure system safety?

This article will explore each of these research questions from a technical perspective, focusing on the safety implications that come with multi-agent teaming. The purpose of the analysis is to further develop the body of knowledge around multi-agent teams, highlighting the significance of teaming compositions and the importance of systematically defining and actualising goals. Based on the findings and the context in which they are made, the article presents a method for safely scaling multi-agent teams.

While this article focuses on the technical considerations of HST and HMT, teaming itself is a social construct and therefore considerations around teaming are not just technical. Social considerations, particularly trust, contribute to the overall safety and effectiveness of a teaming operation. These considerations would warrant a paper in their own right, so further analysis is omitted from this short work.

HST and HMT have many similarities but also a number of differences that distinguish them. As hyper-teaming involves the combination of HST and HMT, it is important to understand these distinctions. One of these distinctions is around how agency is conceptualised within a swarm. In this regard, it is important to understand whether the swarm is one agent made up of multiple systems working towards the same goal, or whether instead it is made of multiple independent agents with local goals that ultimately contribute to a broader goal. Planning and facilitating operations with swarms, particularly in the case of hyper-teaming in which swarms are integrated into broader HMT, requires an understanding of agentive composition. This understanding dictates how goals are defined and actualised within an operation. This topic is explored in detail in Section 2.

Section 3 delves further into the distinction between HST and HMT, focusing on the differences in communication structures from three different perspectives: roles, responsibilities and interdependencies between agents within the team; coordination of communication between team members; and situational awareness through shared cognition. Section 4 presents a method for defining goals for multi-agent teams. Section 5 then offers a framework for safely scaling teaming operations, which is applied to Army’s hyper-teaming operations as a case study analysis.

Conceptualising Agency

The notion of agency for cognitive systems is defined by three attributes that an agent should possess. The agent should be capable of acting independently, it should be capable of reacting to its environment, and its actions should be in pursuit of an identified goal.[6]

Swarms represent a collective conception of agency, with swarm intelligence (SI) emerging from decentralised and self-organised systems that follow simple patterns of behaviour. SI, a subset of AI, refers to the ‘emergent collective intelligence of groups of autonomous agents’.[7] The concept of SI for artificial systems developed from the field of cellular robotic systems, which involves cooperation between machines to achieve a predefined goal or task. In cellular robotic systems, the concept of SI was used to describe self-organisation of machines through nearest neighbour interaction.[8] There are two factors that distinguish SI from cellular robotic systems. Firstly, cellular robotic systems involve a finite number of robots operating in a finite space. Secondly, communication in such systems is limited to interactions between adjacent robots.[9]

With SI, there are no boundaries confining the number of machines and the space within which they must operate. Further, communication is not limited to neighbour interaction.[10] In swarms, complex dynamics emerge from the patchwork of interactions between the individual agents that comprise the larger system. The individual agents that make up multi-agent systems are self-organised entities that respond to local information within their domain of possible interactions. Decentralised and self-organised systems exhibit complex behaviours that shape collective notions of individuality and agency. Individuality is one of the core attributes of agency. If a swarm is composed of agentive systems that are distinguishable as individual entities, it is more difficult for the collective to coordinate interactions as one entity.
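To make the idea of self-organisation concrete, the following minimal sketch (in Python, with rules and parameters that are illustrative assumptions rather than a model of any fielded system) shows how a cohesive collective behaviour can emerge when each agent follows one simple local rule, here moving towards the centroid of its nearest neighbours, without any central controller:

```python
import math
import random

def nearest_neighbours(agent, agents, k=3):
    """Return the k agents closest to `agent` (local information only)."""
    others = [a for a in agents if a is not agent]
    return sorted(others, key=lambda o: math.dist(agent["pos"], o["pos"]))[:k]

def step(agents, speed=0.1):
    """One update cycle: every agent moves towards the centroid of its
    nearest neighbours. No agent sees the global state, yet the swarm
    as a whole contracts into a single cluster: emergent collective behaviour."""
    for agent in agents:
        nbrs = nearest_neighbours(agent, agents)
        cx = sum(n["pos"][0] for n in nbrs) / len(nbrs)
        cy = sum(n["pos"][1] for n in nbrs) / len(nbrs)
        dx, dy = cx - agent["pos"][0], cy - agent["pos"][1]
        dist = math.hypot(dx, dy) or 1.0  # avoid division by zero
        agent["pos"] = (agent["pos"][0] + speed * dx / dist,
                        agent["pos"][1] + speed * dy / dist)

agents = [{"pos": (random.uniform(0, 10), random.uniform(0, 10))} for _ in range(20)]
for _ in range(200):
    step(agents)
```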

As noted above, SI emerged from cellular robotic systems. Given this genesis, it might be expected that the foundational ideology of the swarm agency concept aligns with that of cellular robotic systems. However, more recent language and research around SI suggests that swarms could be perceived as one collective agent.[11]

Minar et al. describe swarms as a ‘group of agents and their schedule of activity.'[12] This description leans into the idea of swarms being a cohesion of individual agents; however, ‘schedule of activity’ suggests collective emergent behaviour. The authors’ assertions regarding the strategic advantages of individual agents within a swarm support the ideology that swarms pursue localised goals and support individual goal-fulfilling behaviour. In order to direct swarms towards more collective, global outcomes, Walker et al. propose a mechanism in which individual agents are selected as leaders among the group.[13] In making this suggestion, Walker et al. acknowledge that agents within swarms act individually and that having humans guide the swarm ‘leaders’ is a necessary mechanism to ensure that one global goal is actualised. By contrast, Giles and Giammarco[14] do not appear to subscribe to the notion of individuality within swarms. Instead, they view swarms as constituting one collective agent that pursues one global goal.

The varied descriptions of swarms and their behaviour invite multiple interpretations of agency for swarms. When considering how swarm technology might be used, it seems each of the suggested interpretations of agency holds true under different contexts. Take, for example, a swarm of drones used for mapping large areas.[15] In this example, the swarm consists of individual agents cooperating to achieve a collective and easily segmentable goal—mapping a large area. Each drone captures a particular segment of a broader area and the collation of data from the drone swarm provides enough information to map one large area. Each drone has its own task that contributes to the broader goal, so the absence of one agent within the swarm would impact the overall goal because a portion of the area being mapped would be missing. In the context of this example, the idea of swarms being a cohesion of individual agents would be most appropriate, as each agent is working towards its own individual goal. However, if the swarm were to have self-healing mechanisms[16] (also referred to as resilience[17]), the remaining agents should theoretically have the capacity to act as self-organised systems, redistributing responsibilities to ensure the broader goal is fulfilled. In such a situation, one or more agents would map the missing area in addition to their own allocated areas. In this approach, the notion of a swarm being defined as one collective agent would be more fitting, as the entities that comprise the swarm are altering their behaviour as a means of fulfilling the collective goal.
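The redistribution logic described above can be sketched in a few lines. The allocation scheme and agent names below are hypothetical; the point is that when one drone fails, its map segments are reabsorbed by the least-loaded survivors so that the collective goal is still fulfilled:

```python
def assign_segments(drones, segments):
    """Initial allocation: distribute map segments across drones round-robin."""
    allocation = {d: [] for d in drones}
    for i, seg in enumerate(segments):
        allocation[drones[i % len(drones)]].append(seg)
    return allocation

def self_heal(allocation, failed_drone):
    """Redistribute a failed drone's segments to the least-loaded survivors,
    so that the collective mapping goal is still fulfilled."""
    orphaned = allocation.pop(failed_drone)
    for seg in orphaned:
        least_loaded = min(allocation, key=lambda d: len(allocation[d]))
        allocation[least_loaded].append(seg)
    return allocation

allocation = assign_segments(["d1", "d2", "d3", "d4"], [f"seg{i}" for i in range(8)])
allocation = self_heal(allocation, "d3")  # d3 drops out; its two segments are reabsorbed
```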

When considering these examples, it can be seen that an entity’s capacity to act plays a role in how collective tasks are handled, and this has significance for how one considers swarms. The multiple perspectives of swarms and SI presented in the literature may in fact be reflective of the significance of context in swarm operations. It might not be possible to confine swarms and SI within one definition as the context and capacity to act impact how swarms are organised and therefore defined. The notion of agency for swarms may be viewed as a cohesion of individual agents when goals are defined and actualised on more localised levels. In cases of global goals and goal actualising behaviour, the idea of one collective agent is more appropriate.

In the case of hyper-teaming, which can involve swarms operating within an HMT, understanding the agentive composition of the swarm, and how goals are defined, will be critical to effectively integrating swarms with an HMT operation. The differences between HST and HMT dictate how communication is managed, how roles and responsibilities are distributed, and the level of situational awareness among human and non-human agents within the operation. This topic is explored further in the following section.

One-to-Many, Many-to-One

The notion of HMT (also commonly referred to as human-autonomy teaming[18]) encompasses the concept of teams. The term ‘team’ can be understood to mean a group of individuals working together to accomplish a goal, where there is an element of interdependence and combined efforts.[19] The term also implies a discrete lifespan, distributed expertise and clearly defined roles.[20] Some studies suggest that a capability can be considered a member of a team if it has the capacity to take on roles and responsibilities and to function interdependently.[21] Teaming extends beyond one-to-one interactions, often including multiple heterogeneous agents—human and non-human—in the broader HMT system, with each agent assigned their own roles and responsibilities. Here, a system is described as a composition of multiple parts and is often defined by the interactions between those parts.[22] When conceptualising a system, it should be considered as a whole, as Whitchurch and Constantine[23] describe:

Wholeness is characteristic of systems because there are properties or behaviours of the system that do not derive from the component parts themselves when considered in isolation. Rather, these emerge from their specific arrangement in a particular system and from the transactions among parts made possible only by that arrangement. These are called emergents or emergent properties because they emerge only at the systemic level.

In HMT operations, machines do not replace humans; rather, the collaboration between human and machine achieves ‘outputs that neither machines nor people could deliver independently’.[24]

Before moving forward, it is instructive to note the difference between HMT and human-machine interaction (HMI) for the purpose of clarity in terminology. The concept of HMI was popularised in the 1980s and, at the time, it characterised a dialogue between humans and computers.[25] HMI involves actions by a human that elicit immediate responses by a machine through physical communication prompts, such as pushing buttons, reading dials or responding to warning signals or messages. In this dynamic, the human is limited in their interaction capacity and the machine is limited in its ability to respond. These limited interactions incur little uncertainty on either side and invite little or no opportunity for negotiation between the two parties. In comparison, the concept of HMT extends beyond the one-to-one interactions that are seen in HMI.

There exist numerous proposed definitions of HMT—no consensus has yet been reached on a single definition. Many of these definitions articulate a narrative in which HMT involves pursuit of ‘shared’, ‘common’ or ‘aligned’ goals.[26] In fact, it would be more precise to say there is an overarching system goal and, in order to achieve that goal, the goals of the human teammate and the goals of the machine teammate are ‘aligned’. The overall system goal is set by human decision-makers, such as operators and developers, and represents the function or purpose of the HMT system. This overall system goal is then translated for the machine into objective functions and reward functions—collectively, ‘functions’. The machine is designed to optimise these functions to achieve required outputs.[27] The machine’s goals are aligned to the human teammate’s goals in order to achieve the overall system goal. The goals are ‘aligned’ rather than ‘common’ or ‘shared’ because there are two categorically distinct kinds of goals here: the human’s operational goals and the optimisation of the machine’s functions.[28] The human’s goals are largely qualitative and the machine’s goals are necessarily quantitative.
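As an illustration of this translation, consider the sketch below, in which a qualitative operational goal such as 'survey the area quickly and stay within the approved corridor' is rendered as a quantitative reward function. The terms and weights are assumptions chosen for illustration, not a reflection of any particular Army system:

```python
def reward(coverage_fraction, elapsed_minutes, corridor_violations,
           w_cover=1.0, w_time=0.02, w_violation=0.5):
    """Illustrative reward function: the qualitative goal 'survey the area
    quickly and stay within the corridor' becomes a weighted sum of
    quantitative terms that the machine can optimise."""
    return (w_cover * coverage_fraction
            - w_time * elapsed_minutes
            - w_violation * corridor_violations)

# Slower but thorough beats fast but incomplete under these (assumed) weights:
print(reward(0.9, 25, 0))  # ~0.40
print(reward(0.6, 15, 0))  # ~0.30
```

The machine optimises the numerical score, while the human reasons about the operational outcome; the two kinds of goals remain categorically distinct but are aligned through the choice of terms and weights.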

Working from the literature underpinning the concept of teams and that of HMT, and considering the conceptual understanding of aligned goals, the following definition for HMT is proposed:

A combination of human and machine agents working together towards a system goal that is achieved through a set of aligned goals.

Theoretically, HST should align with this proposed definition of HMT; however, there are some distinctions between the two. Firstly, the composition of a team differs between HST and HMT. Literature on biological swarms pertains to insects or animals of the same species.[29] Equally, artificial swarms have followed a similar pattern, with HST comprising elements with the same technology capability—for example, drone swarms. This does not mean swarms cannot be implemented across an integrated platform; rather, the swarm itself only comprises one type of capability. In comparison, HMT can consist of multiple different capabilities and has been described in the literature as a multi-capability system.[30] Teaming in this scenario often involves collaboration across integrated platforms.

The second distinction between HST and HMT relates to coordination. As Kolling et al. describe, artificial swarms:

involve coordination between robots that relies on distributed algorithms and information processing. Because of this, global behaviors are not explicitly stated and, instead, emerge from local interactions. In such cases, the individual robots themselves likely could not act independently in any successful manner.[31]

By comparison, HMT demonstrates what can be characterised as greater levels of independence. While the agents within an HMT do coordinate to achieve a shared goal, their behaviours are self-interested and centred around fulfilling their designated outputs.[32]

The third distinction between HST and HMT pertains to communication structures within the team. The rest of this section of the article details communication structures in teaming operations, for both HST and HMT, from three different perspectives. The first is understanding roles, responsibilities and interdependencies between agents within the team. The second is determining coordination of communication between team members. The third perspective explores situational awareness through shared cognition.

Understanding Roles, Responsibilities and Interdependencies

In HMT, roles and responsibilities are independent for each agent and interdependent within the broader team.[33] That is, each agent is capable of operating independently to achieve individual goals. The interdependency manifests in the cooperation to achieve a broader system goal, where the system represents the HMT. Each agent is capable of fulfilling their roles and responsibilities independently; however, the broader system goal cannot be effectively fulfilled in the absence of one of the agents within the team—be they human or non-human.

By comparison, in self-healing or resilient swarms, roles and responsibilities are all interdependent, so individual agents could not successfully operate independently. Artificial swarms are therefore generally more robust, flexible and scalable than HMT.[34] Conversely, swarms which are conceptualised as individual agents working towards a collective goal will operate similarly to an HMT structure.

One of the distinguishing factors of teaming is that it involves more than one human and one machine.[35] Each agent—both human and non-human—possesses roles and responsibilities which contribute to a particular goal, be it a localised goal or a global system goal.[36] As the numbers and types of agents in a system increase, so too does the complexity of the interdependencies between these agents. As Rusbult and Van Lange state, ‘interaction is shaped by broader considerations’.[37] The challenges of managing these interdependencies (which include information sharing and coordinating communication) increase as the number of agents, human and non-human, increases.

In the case of hyper-teaming, which includes a combination of swarming and HMT, the interdependencies will be even more complex. The significance of interdependence reveals itself in situations where errors, malfunctions or disruptive deviations to an operation occur. These instances can lead to a snowball effect on other agents within the teaming operation.

Coordination and Assurance of Communication

As teaming involves multiple interdependent agents, coordinating communication between these heterogeneous entities is essential for effective and successful teaming operations. In fact, team cognition is often characterised by communication and coordination processes.[38] The challenge for human agents within an HMT will be to effectively navigate reciprocal and dynamic communication.[39] Coordinating which agents need to communicate with one another, at what time and in what format—verbal, signalling, text etc.—is critical to effective teamwork. For HST with resilient swarm structures, communication is coordinated within the swarm structure, and is not managed or directed by human agents.[40] Additionally, as swarms are made up of one type of agent—for example, drones—the format of communication will be consistent across agents.
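One way to make this coordination explicit is a routing table that records, for each message type, who receives it, in what format and how often. The sketch below is hypothetical and simplified; in HMT such a table would be set and managed by human planners, whereas in a resilient swarm the equivalent coordination is internal to the swarm itself:

```python
# A hypothetical routing table: for each message type, who receives it,
# in what format and how often.
ROUTING = {
    "target_report":   {"recipients": ["commander", "uav_2"], "format": "text",   "period_s": 5},
    "position_update": {"recipients": ["all"],                "format": "signal", "period_s": 1},
    "low_battery":     {"recipients": ["operator"],           "format": "alert",  "period_s": 0},
}

def route(message_type, payload):
    """Look up which agents should receive a message, and in what format."""
    rule = ROUTING[message_type]
    return [(recipient, rule["format"], payload) for recipient in rule["recipients"]]

print(route("target_report", {"grid": "34T CN 123 456"}))
```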

In addition to coordinating communication, humans need assurance of the information being communicated (notably, this requirement will be essential for safety-critical information). Particularly in dynamic operating environments, effective communication requires accurate and consistent information.[41] Team processes will inevitably be affected by the tolerances on the accuracy and integrity of information, as well as by the robustness of that information, demonstrated in the availability and continuity of information between interdependent agents. Depending on the nature of the operation, these assurances may need to be provided in real time.

Assurance mechanisms for information may differ depending on the format, content and purpose of that information. In general terms, quantifiable metrics are more accepted in safety-critical environments because they are repeatable and justifiable.[42] For example, the aviation industry has regulatory requirements for the real-time assurance of positioning data, which are standardised quantifiable measures.[43] Quantifiable assurance mechanisms take time. While this time may appear short when considering advanced computational capabilities (perhaps a matter of minutes or seconds), even this latency may be a hindrance in a dynamic operational environment.
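A minimal sketch of what such quantifiable, real-time assurance might look like for positioning data is given below. The staleness and plausibility thresholds are illustrative assumptions, not values drawn from any aviation standard:

```python
import math
import time

MAX_AGE_S = 2.0    # illustrative staleness tolerance, seconds
MAX_SPEED = 60.0   # illustrative plausibility bound, metres per second

def assure_position(report, previous):
    """Quantifiable, repeatable checks on a safety-critical position report.
    Returns (accepted, reason). Even cheap checks like these add latency."""
    age = time.time() - report["t"]
    if age > MAX_AGE_S:                       # availability/continuity check
        return False, f"stale by {age:.1f} s"
    if previous is not None:                  # integrity/plausibility check
        dt = max(report["t"] - previous["t"], 1e-6)
        speed = math.dist(report["pos"], previous["pos"]) / dt
        if speed > MAX_SPEED:
            return False, f"implausible speed {speed:.0f} m/s"
    return True, "accepted"

now = time.time()
print(assure_position({"t": now, "pos": (0.0, 0.0)}, None))  # accepted
print(assure_position({"t": now, "pos": (500.0, 0.0)},
                      {"t": now - 1.0, "pos": (0.0, 0.0)}))  # rejected: 500 m/s
```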

For hyper-teaming operations, it is important to determine how information is being coordinated between agents and between teaming compositions—swarming and HMT. Information deemed safety critical should be identified alongside mechanisms for assuring that information, be it quantifiable or otherwise.

Situational Awareness through Shared Cognition

Situational awareness can be facilitated through shared cognition, in real time, of team members’ roles and responsibilities, goals, limitations etc.[44] When considering the heterogeneity of HMT systems, dynamic operating environments can generate emergent behaviours at the system level.[45] These behaviours result in diversity in the body of knowledge held by individual agents within a system.[46] Sharing such knowledge among other team members contributes to effective team processes.[47] When considering the interdependencies of HMT systems, situational awareness through shared cognition is essential for actualising goals at both the local and system levels.[48]

There is an abundance of information that can be made available from the digitally enabled agents which make up an HMT. The challenge is finding the balance between quality and quantity of information. Data is a collation of unprocessed facts, while information is data that has been given meaning through context and interpretation. The transition from data to information takes time, and more data does not always equate to more information.[49]

It is unsustainable and impossible to effectively utilise every single data point collected in an operation. Nevertheless, choosing to omit certain information may lead to misrepresentation of the operation, as well as of the respective operating environment. Understanding what information is important and relevant, and prioritising that information, will be key to effective and safe operations. This will be a core function of human commanders and operators within HST and HMT.

It is important to note the connection between goals and situational awareness. Goals can be used to define how much situational awareness (and therefore information) is required for each agent.[50] What is and is not relevant information to each agent—human and non-human—will be heavily dependent on the goals of that agent.

For resilient HST, situational awareness is facilitated in the swarm structure and is not reliant on human agent intervention.[51] In the case of hyper-teaming, there must be shared cognition between the swarm and the broader HMT to ensure collective situational awareness for the broader hyper-team.

A summary of the differences in communication structures between HST and HMT is provided in Table 1.

Table 1. Summary of the differences between HST and HMT communication structures

| | HMT | HST |
| --- | --- | --- |
| Understanding roles, responsibilities and interdependencies | Roles and responsibilities are independent for each agent and interdependent within the broader team. | In self-healing or resilient swarms, roles and responsibilities are all interdependent and, as such, individual agents could not successfully operate independently. |
| Coordination and assurance of communication | Requires coordination of communication and information sharing between interdependent agents. Assurance of safety-critical information is required. | For resilient swarm structures, communication is coordinated within the swarm structure, and is not managed or directed by human agents. Assurance of safety-critical information is required. |
| Situational awareness through shared cognition | Situational awareness is achieved through shared cognition between agents. | For resilient HST, situational awareness is facilitated within the swarm structure and is not reliant on human agent intervention. |

While there are differences in the communication structures across HST and HMT, what is consistent across these multi-agent teams is the requirement for goal definition. This is true for both individual and collective goals.

A Method for Defining Goals for Multi-Agent Teams

In the context of this article, goals are defined simply as ‘objectives of the system’. Morasky describes the two major functions of goals as facilitating system control and system evaluation.[52] The effectiveness of a system is dependent on the system’s ability to achieve a desired state or goal, and that desired state or goal is the driving force behind system behaviour. In the context of systems theory, systems are ‘understood as a whole and cannot be comprehended by examining its individual parts in isolation from each other’.[53]

Alignment of goals in multi-agent systems is a form of coordination.[54] In addition to system effectiveness, goal definition is a factor in system safety.[55] Incorrectly defining goals can lead to unexpected system outputs and behaviours. Literature on this topic often distinguishes between soft goals (high-level objectives for the non-functional capabilities of the system) and hard goals (high-level objectives for the functional capabilities of the system).[56] These are also referred to as non-functional and functional goals respectively.[57] The granularity of this distinction is not within the scope of this article.

Beyond delineating soft and hard goals, literature around goal definition for multi-agent teams is sparse.[58] There are, nevertheless, some studies on optimisation in different contexts that are relevant.[59] Based on analysis of this material, this article proposes the following guidelines for defining goals for multi-agent teams. These guidelines are intended to aid users in effectively defining goals for multi-agent teams, to facilitate system evaluation and to ensure system safety:

Be clear and specific so that goals can be objectively interpreted

Goal specificity encourages a means–end analysis approach, thus supporting the pursuit of behaviours that transform the current state into the specific goal state.[60] There is also literature related to ‘goal setting theory’ which argues that specific goals lead to higher levels of task performance in comparison to vague and/or unclear goal definitions.[61] This being the case, goals must be defined clearly and with specificity to ensure cohesion among the multiple agents that comprise an HMT or HST.

Distinguish between individual local goal(s) or collective global goal(s)

As discussed in Section 2, the concept of agency for swarms is dynamic and highly dependent on context. There may be situations in which swarms consist of agents working towards individual goals, while other contexts may involve multiple agents working towards one collective goal. When defining goals, it is important to distinguish between the two. Defining one goal for the collective will differ from defining multiple individual goals. For HMT, there will be both individual goals and a broader system goal.

Specify time requirements

Generating effective systems depends on the achievement of effective coordination among multiple agents.[62] In turn, cohesive coordination is dependent on time parameters. If time parameters are not specified, each agent may fulfil their goals across varying timeframes. In a static environment, this may not cause concern. However, in the dynamic environments that commonly characterise military operations, time variations in task completion will impact the accuracy and validity of the information generated. If one agent returns data in 10 minutes and another returns data in 30 minutes, that 20-minute difference may, in a safety-critical setting, be significant enough to render the information no longer usable.[63]

Make goals measurable or quantifiable

If goals are not measurable or quantifiable, it is difficult to know when a goal has been achieved. For multi-agent teams (which operate with multiple parallel and complementary goals), it must be possible to clearly determine whether a goal is complete, and the assessment must be binary in nature: the goal either has or has not been achieved. If the assessment is left open to multiple interpretations, there is no clear measure of effectiveness, and this means that military operations cannot be fulfilled effectively.
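Taken together, the four guidelines can be encoded in a simple goal structure. The sketch below is illustrative (the field names and the example goal are assumptions): each goal carries a specific description, an explicit local or global scope, a time requirement, and a binary, measurable completion test:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    """A single goal encoding the four guidelines above."""
    description: str                     # clear and specific, objectively interpretable
    scope: str                           # 'local' (one agent) or 'global' (the team)
    deadline_s: float                    # explicit time requirement
    is_complete: Callable[[dict], bool]  # binary, measurable completion test

# Illustrative local goal for one drone in a mapping task:
map_sector_7 = Goal(
    description="Map sector 7 at a resolution of at least 0.5 m",
    scope="local",
    deadline_s=600.0,
    is_complete=lambda state: state.get("sector7_coverage", 0.0) >= 1.0,
)

print(map_sector_7.is_complete({"sector7_coverage": 1.0}))  # True: achieved
print(map_sector_7.is_complete({"sector7_coverage": 0.8}))  # False: not achieved
```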

Challenges of Scale

There are a number of challenges that come with scaling multi-agent teams. Of these, the following three are of focus for this article: judgements of risk, latency in decision-making, and maintaining communication structures. These challenges and their implications are discussed in turn.

Judgements of Risk

A state of zero risk will never exist, because risks can only be mitigated; they cannot be eliminated.[64] For multi-agent teaming operations, there must therefore be an understanding of what constitutes an acceptable level of risk. The principle of as low as reasonably practicable (ALARP) is commonly implemented in the civil sector for managing risks.[65] However, the military domain operates under more nuanced risk thresholds where the concept of what is reasonably practicable will differ from the concept in the civil domain.

In the case of scaled teaming operations (which comprise an amalgam of goals between the many agents within the team), judgements of acceptable risk will need to be considered individually and in relation to the potential impacts of identified risks on the broader teaming operation. Additionally, for non-human agents with roles, goals or functions deemed safety critical, there may need to be a human point of assurance or redundancy. In a scaled operation, such as hyper-teaming (which includes many non-human agents), it will be particularly challenging to maintain the human element across these systems in real time.

Latency in Decision-Making

When teaming operations are conducted in dynamic operating environments, decisions will need to be made for the whole duration of the operation. The benefit of digital systems is their capacity to analyse vast amounts of data in much shorter timeframes than a human could manage. In multi-agent teaming operations, where humans and RAS-AI systems operate in tandem, there may be a latency in decision-making between humans and machines. This latency will be heightened in scaled operations, such as hyper-teaming operations, which involve greater numbers of digitally enabled systems.

Additionally, subject to the goal definition framework presented earlier in this article, cohesive coordination is dependent on the designation of time parameters. While goals should be defined with reference to specific time requirements, the interdependent nature of multi-agent teams may result in latency of human decision-making. Such delays will impact the capacity of RAS-AI systems to meet designated time requirements.

Maintaining Communication Structures

This article has highlighted the difference in communication structures between HST and HMT. For scaled operations, managing these communication structures will be critical to safety. The complexity of managing these communication structures increases with the number of agents—both human and non-human—within the team.

For hyper-teaming operations (which include both sets of communication structures), this complexity is exponentially increased. For safety-critical operations, communication will play a fundamental role in maintaining safety. Accordingly, managing communication structures is critical to safely scaling multi-agent teaming operations.

Framework for Safely Scaling Multi-Agent Teaming Operations

For the purposes of this analysis, five guiding principles have been adapted to support the safe scaling of multi-agent teaming operations. These principles draw upon a previous research study which presented a method for categorising physical and psychosocial safety risk for HMT operations.[66] As this article is more technically focused, the guiding principles have been amended. The original guiding principles, and how they have been amended, are presented in Table 2.

Table 2. Amended guiding principles

| Original guiding principle | Amended guiding principle | Reasoning for amendment |
| --- | --- | --- |
| Adaptability—understanding the capacity to which the human and the machine can adapt to their environment. | Agency—technical understanding of the agentive composition of swarms in HST. | The original principle is more focused on a capacity to act, whereas the amended principle is focused on a technical understanding of the composition of a swarm. The concepts are closely related but amended slightly due to the technical focus of this article. |
| Goal setting and goal actualisation—understanding how goals are determined and actualised for both humans and machines. | Goals—defining goals and monitoring goal completion. | Minor simplification of the label and additional focus on goal completion to capture the more technical considerations of goal setting and goal actualisation. |
| Communication—understanding how, what, why and when information is communicated between human and machine. | Unchanged | Unchanged |
| Ethics—understanding the ethical implications of humans operating in close proximity to a machine within specific environments. | Information—identifying safety-critical information and determining methods of assurance for that information. | Ethics is considered out of scope of this article. It is therefore omitted and replaced with information. |
| Trust—understanding how trust between the two entities influences decision-making. | Interdependence—understanding interdependencies between actors and the critical points of these interactions. | As mentioned in the introduction, trust is considered out of scope of this article. It is a concept which requires a study of its own. The terminology of trust has therefore been replaced by the closely related concept of interdependence. |

These guiding principles are detailed below and applied to the concept of hyper-teaming in an Army context to demonstrate how they would be actioned in practice.

Agency

Understanding the agentive composition of swarms, and how goals are defined, is essential to safely integrating these systems into a broader teaming operation, such as Army’s hyper-teaming operation. The difference in how agency is conceptualised dictates how goals are defined, how communication is managed, how roles and responsibilities are distributed and the level of situational awareness within the operation.

When scaling multi-agent teaming operations, the following steps should be taken in consideration of agency:

  • Determine the agentive composition of the swarm.
  • If the swarm consists of multiple independent agents, define local goals using the goal definition method outlined in Section 4.
  • If the swarm consists of collective agency, define a global goal using the goal definition method outlined in Section 4.

Taking Army’s hyper-teaming operations as an example, if the swarm component of such an operation consisted of individual agents, there would be multiple goals which need to be monitored and achieved throughout the operation. However, if the swarm were to operate as a resilient collective agent, there would be only one global goal to monitor and achieve. Establishing this distinction from the beginning will support more streamlined and effective operations.
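In code, the agency steps reduce to a branch on the swarm's composition before any goals are defined. The sketch below is hypothetical and assumes goal specifications built with the method of Section 4:

```python
def define_goals(composition, agents, global_spec, local_specs):
    """Branch on the swarm's agentive composition before defining goals.
    `global_spec` and `local_specs` are goal objects built with the goal
    definition method of Section 4."""
    if composition == "independent_agents":
        # one local goal per agent, each monitored separately
        return {agent: local_specs[agent] for agent in agents}
    if composition == "collective":
        # a single global goal monitored for the swarm as a whole
        return {"swarm": global_spec}
    raise ValueError(f"unknown agentive composition: {composition}")
```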

Goals

The effectiveness of a teaming operation is dependent on the system’s ability to achieve a desired state or goal, and that desired state or goal is the driving force for behaviour. Alignment of goals in multi-agent teaming operations is a form of coordination and is also a factor in system safety. Incorrectly defining goals can lead to unexpected system outputs and behaviours.

When scaling multi-agent teaming operations, the following steps should be taken in consideration of goals:

  • Define local and global goals using the goal definition method outlined earlier.
  • Determine a mechanism for monitoring goal completion.

The amalgam of many human and non-human agents can make the logistical organisation of an operation challenging. Goals, while essential to an operation, are also a strategic means of organising roles and responsibilities and monitoring the effectiveness of the operation over time.

Applying this principle to the Army’s hyper-teaming operation, clear goals for each platform are a prerequisite to the successful integration of multiple platforms at scale. Equally, commanders must understand how those goals contribute to the overarching goal of the whole operation. The goals will need to be defined prior to the operation, with methods for monitoring and measuring goal completion during and after the operation. Some of these measures may be technical and others may be fulfilled by a human actor. For hyper-teaming, the number of goals will scale in parallel with the scale of the operation. The interdependence of these goals may also dictate how the operation can be carried out. This may include, for example, the order in which actors fulfil their goals.
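A monitoring mechanism can be as simple as polling each goal's binary completion test against the latest reported state and escalating deadline breaches to a human commander. The sketch below assumes goal objects of the kind outlined in Section 4 and is illustrative only:

```python
import time

def monitor(goals, get_state, poll_s=1.0):
    """Poll each goal's binary completion test against the latest reported
    operation state; escalate deadline breaches to a human commander.
    `goals` maps a name to an object with is_complete(state) and deadline_s
    (measured here from the start of monitoring)."""
    start = time.time()
    pending = dict(goals)
    while pending:
        state = get_state()
        for name, goal in list(pending.items()):
            if goal.is_complete(state):
                print(f"{name}: achieved")
                del pending[name]
            elif time.time() - start > goal.deadline_s:
                print(f"{name}: deadline exceeded, escalating to commander")
                del pending[name]
        time.sleep(poll_s)
```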

Information

Teaming operations involve a plethora of information analysed and shared among the many agents within the team. Determining what information is important to the teaming operation (particularly in the context of safety-critical operations) will be essential to the achievement of effective information management. Some information may require different levels of assurance or oversight depending on the context of the operation. Judgements of risk come into effect when determining what information is safety critical and what is not. Additionally, latency in decision-making is a relevant factor to consider for any information which needs human oversight or human assurance.

When scaling multi-agent teaming operations, the following steps should be taken in consideration of information:

  • Identify what information is safety critical or is related to safety-critical components of the operation.
  • Determine what assurances are required for that information and how they can be achieved within timeframes and processes which align with the operation.

In Army’s hyper-teaming operation, many of the autonomous agents’ functions may be deemed safety critical, requiring different levels of assurance, some human and others digital. Any form of assurance will require time to achieve, and even a few minutes of delay can disrupt an operation. When scaling operations, it is therefore important to consider how to achieve adequate assurance. For hyper-teaming operations, the existence of many different platforms will add an additional layer of complexity to this task due to differing expectations of assurance. For example, air and land platforms are designed and operated against different standards relevant to their industries. Balancing such standards against the information requirements of these platforms is essential if operations are to be safely scaled.

Communication

The structure, management and coordination of communication between agents in a teaming operation is fundamental to safe and effective operations. Teaming operations involve reciprocal and dynamic communication, as well as the requirement to operate within the changing and unpredictable environments common in military contexts. The complexity of managing and coordinating multiple types of communication across different platforms (whether verbal, signalling, text etc.) increases with the number of agents in the team. To safely scale multi-agent teams, there must be safe and effective communication methods supported by appropriate management structures.

When scaling multi-agent teaming operations, the following steps should be taken in consideration of communication:

  • Determine what communication structures are required within the operation.
  • Identify how communication is to be coordinated and managed throughout the operation.

For hyper-teaming operations which consist of both HST and HMT, determining communication structures is the first step in understanding how communication needs to be coordinated and managed throughout the operation. Given the interdependent nature of these operations, effective communication is essential to remedying unexpected or incorrect outputs or behaviours before they create a flow-on effect through the operation. This is particularly important for hyper-teaming operations which involve multiple platforms that may require multiple different communication mechanisms.

Interdependencies

Multi-agent teaming operations are achieved through interdependent agents cooperating to achieve a system goal. These interdependencies create flow-on effects between agents, some constructive and others disruptive. In order to safely scale teaming operations, these interdependencies must be understood by commanders. They must also appreciate the potential flow-on consequences of unexpected or incorrect outputs or behaviours from agents (both human and non-human) within the teaming operation. The team’s interdependencies will ultimately be shaped by judgements around risk, and the impacts of latency in decision-making.

When scaling multi-agent teaming operations, the following steps should be taken in consideration of interdependencies:

  • Map interdependencies across the teaming operation.
  • Identify critical points across this map, and measures to ensure these points are monitored and managed effectively.

There will be critical points of interdependence across any teaming operation in which one agent cannot fulfil their roles and responsibilities without the work of another agent. Equally, the ineffective operation of one agent can have significant repercussions for other agents. These points are accompanied by high risk and should be identified and appropriately managed. For hyper-teaming operations, these points will likely emerge across the different platforms, placing greater emphasis on the importance of the guidelines discussed here.
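A simple way to operationalise these steps is to record interdependencies as a directed map from each agent to the agents it relies on, then flag providers that several others depend on. The agent names and the criticality heuristic below are illustrative assumptions:

```python
# A hypothetical interdependency map: each agent lists the agents it relies on.
DEPENDS_ON = {
    "uav_1":     ["relay_1"],
    "uav_2":     ["relay_1"],
    "ugv_1":     ["relay_1", "operator"],
    "relay_1":   ["operator"],
    "operator":  [],
    "commander": ["operator"],
}

def critical_points(depends_on, threshold=2):
    """Flag providers that several other agents rely on. These are the points
    where a single failure snowballs, so they warrant monitoring and redundancy."""
    fan_in = {agent: 0 for agent in depends_on}
    for dependencies in depends_on.values():
        for provider in dependencies:
            fan_in[provider] += 1
    return [agent for agent, count in fan_in.items() if count >= threshold]

print(critical_points(DEPENDS_ON))  # ['relay_1', 'operator']
```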

Table 3 summarises the guidelines.

Table 3. Summary of the framework for safely scaling multi-agent teams

| Guideline | Steps |
| --- | --- |
| Agency | Determine the agentive composition of the swarm. If the swarm consists of multiple independent agents, define local goals using the goal definition method outlined in Section 4. If the swarm consists of collective agency, define a global goal using the goal definition method outlined in Section 4. |
| Goals | Define local and global goals using the goal definition method outlined in Section 4. Determine a mechanism for monitoring goal completion. |
| Information | Identify what information is safety critical or is related to safety-critical components of the operation. Determine what assurances are required for that information and how they can be achieved within timeframes and processes which align with the operation. |
| Communication | Determine what communication structures are required within the operation. Identify how communication is to be coordinated and managed throughout the operation. |
| Interdependencies | Map interdependencies across the teaming operation. Identify critical points across this map and measures to ensure these points are monitored and managed effectively. |

Conclusion

Understanding the concepts relevant to teaming is pivotal in effectively determining how these operations need to be organised and managed, and specifically in determining their goals. This article has presented a method for defining goals for multi-agent teams, with the intention that it informs the actions of both human and non-human agents within a team. Multi-agent teaming has proven benefits for military operations; however, these operations are not without their challenges, particularly when implemented at scale. These challenges include judgements about the acceptability of risk, the impact of latency in decision-making, and the need to establish and maintain appropriate communication structures. With these key challenges in mind, this article has presented a framework for scaling multi-agent teams. The framework presented five guiding principles to support the safe scaling of multi-agent teaming operations: agency, goals, information, communication and interdependencies. The challenges of scaling multi-agent teams are nuanced and interwoven with one another, making mitigation of related risks less straightforward. While these potential risks are more complex to navigate, they are not insurmountable.

Endnotes

[1] C Haimson, CL Paul, S Joseph, R Rohrer and B Nebesh, ‘Do We Need “Teaming” to Team with a Machine?’, in D Schmorrow and C Fidopiastis (eds), Augmented Cognition (HCII 2019), Lecture Notes in Computer Science 11580 (Cham: Springer, 2019), pp. 169–178.

[2] JM Rickli, F Mantellassi and Q Ladetto, What, Why and When? A Review of the Key Issues in the Development and Deployment of Military Human-Machine Teams (Geneva: Geneva Centre for Security Policy, 2024).

[3] J Arquilla and D Ronfeldt, Swarming and the Future of Conflict (Santa Monica CA: RAND, 2000).

[4] Z Assaad, ‘A Risk-Based Trust Framework for Assuring the Humans in Human-Machine Teaming’, in Proceedings of the Second International Symposium on Trustworthy Autonomous Systems (New York: Association for Computing Machinery, 2024), pp. 1–9.

[5] A Neads, DJ Galbreath and T Farrell, From Tools to Teammates: Human-Machine Teaming and the Future of Command and Control in the Australian Army, Australian Army Occasional Paper No. 7 (Australian Army Research Centre, 2021).

[6] Mog Stapleton and Tom Froese, ‘Is Collective Agency a Coherent Idea? Considerations from the Enactive Theory of Agency’, in Catrin Misselhorn (ed.), Collective Agency and Cooperation in Natural and Artificial Systems, Philosophical Studies Series 122 (Cham: Springer, 2015), at: https://philpapers.org/archive/FROICA.pdf.

[7] Y Liu and KM Passino, Swarm Intelligence: Literature Overview (Department of Electrical Engineering, The Ohio State University, 2000), at: http://www2.ece.ohio-state.edu/~passino/swarms.pdf.

[8] Gerardo Beni and Jing Wang, ‘Swarm Intelligence in Cellular Robotic Systems’, in Paolo Dario, Giulio Sandini and Patrick Aebischer (eds), Robots and Biological Systems: Towards a New Bionics? (Berlin, Heidelberg: Springer, 1993), pp. 703–712, at: https://link.springer.com/chapter/10.1007/978-3-642-58069-7_38.

[9] Erol Sahin, Swarm Robotics: From Sources of Inspiration to Domains of Application (Ankara: Department of Computer Engineering, Middle East Technical University, 2005), at: https://www.researchgate.net/profile/Erol-Sahin-2/publication/221116606_Swarm_Robotics_From_Sources_of_Inspiration_to_Domains_of_Application/links/546f5abc0cf24af340c08749/Swarm-Robotics-From-Sources-of-Inspiration-to-Domains-of-Application.pdf.

[10] Gerardo Beni, ‘From Swarm Intelligence to Swarm Robotics’, in E Sahin and WM Spears (eds), Swarm Robotics: International Workshop on Swarm Robotics (SR 2004), Lecture Notes in Computer Science 3342 (Berlin, Heidelberg: Springer, 2005), pp. 1–9, at: https://doi.org/10.1007/978-3-540-30552-1_1.

[11] U.S. Department of Defense, ‘Department of Defense Announces Successful Micro-Drone Demonstration’, media release, 9 January 2017, at: https://www.defense.gov/News/Releases/Release/Article/1044811/department-of-defense-announces-successful-micro-drone-demonstration/; Mauro S Innocente and Paolo Grasso, ‘Self-Organising Swarms of Firefighting Drones: Harnessing the Power of Collective Intelligence in Decentralised Multi-Robot Systems’, Journal of Computational Science 34 (2019): 80–101, at: https://doi.org/10.1016/j.jocs.2019.04.009; Irving Lachow, ‘The Upside and Downside of Swarming Drones’, Bulletin of the Atomic Scientists 73, no. 2 (2017): 96–101, at: https://doi.org/10.1080/00963402.2017.1290879; Catrin Misselhorn, ‘Collective Agency and Cooperation in Natural and Artificial Systems’, in Catrin Misselhorn (ed.), Collective Agency and Cooperation in Natural and Artificial Systems, Philosophical Studies Series 122 (Springer, 2015), at: https://www.researchgate.net/profile/Oliver-Korn-2/publication/282649799_Ethical_Implications_Regarding_Assistive_Technology_at_Workplaces/links/561ba6f908aea80367242146/Ethical-Implications-Regarding-Assistive-Technology-at-Workplaces.pdf#page=15.

[12] Nelson Minar, Roger Burkhart, Christopher Langton and Manor Askenazi, The Swarm Simulation System: A Toolkit for Building Multi-Agent Simulations (Santa Fe: Santa Fe Institute, 1996), at: http://sfi-edu.s3.amazonaws.com/sfi-edu/production/uploads/sfi-com/dev/uploads/filer/8a/2a/8a2ae001-9ad5-43e6-b7e3-4d951223e9e8/96-06-042.pdf.

[13] Phillip Walker, Saman Amirpour Amraii, Michael Lewis, Nilanjan Chakraborty and Katia Sycara, ‘Control of Swarms with Multiple Leader Agents’, in 2014 IEEE International Conference on Systems, Man, and Cybernetics (IEEE, 2014), at: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6974483&casa_token=NcNM-cTO-NwAAAAA:S3o4V8y5HgDEbl3iljDjr-TpcQtTabPRx5HQIPgM80ffbqqmxKRcD3pzu8Dwz8wiD7iZ0xfmfg.

[14] Kathleen Giles and Kristin Giammarco, ‘A Mission-Based Architecture for Swarm Unmanned Systems’, Systems Engineering 22, no. 3 (2019): 271–281, at: https://doi.org/10.1002/sys.21477.

[15] Dario Albani, Joris Ijsselmulden, Ramon Haken and Vito Trianni, ‘Monitoring and Mapping with Robot Swarms for Agricultural Applications’, in 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (IEEE, 2017), pp. 1–6, at: https://doi.org/10.1109/AVSS.2017.8078478; Riccardo Zanol, Federico Chiariotti and Andrea Zanella, ‘Drone Mapping Through Multi-Agent Reinforcement Learning’, in 2019 IEEE Wireless Communications and Networking Conference (IEEE, 2019), pp. 1–7, at: https://doi.org/10.1109/WCNC.2019.8885873.

[16] Neil Eliot, David Kendall, Alun Moon, Michael Brockway and Martyn Amos, ‘Void Reduction in Self-Healing Swarms’, in ALIFE 2019: The 2019 Conference on Artificial Life (London: The MIT Press, 2019), pp. 87–94, at: https://nrl.northumbria.ac.uk/id/eprint/39363/9/isal_a_00146.pdf; Yuan-Shun Dai, Michael Hinchey, Manish Madhusoodan, James L Rash and Xukai Zou, ‘A Prototype Model for Self-Healing and Self-Reproduction in Swarm Robotics System’, in IEEE International Symposium on Dependable, Autonomic and Secure Computing (IEEE, 2006), pp. 3–10, at: https://doi.org/10.1109/DASC.2006.10; Vivek Shankar Varadharajan, David St-Onge, Bram Adams and Giovanni Beltrame, ‘Swarm Relays: Distributed Self-Healing Ground-and-Air Connectivity Chains’, IEEE Robotics and Automation Letters 5, no. 4 (2020): 5347–5354, at: https://doi.org/10.1109/LRA.2020.3006793.

[17] KA Roundtree, MA Goodrich and JA Adams, ‘Transparency: Transitioning from Human-Machine Systems to Human-Swarm Systems’, Journal of Cognitive Engineering and Decision Making 13, no. 3 (2019): 171–195, at: https://doi.org/10.1177/1555343419842776.

[18] R Jay Shively, Joel Lachter, Summer L Brandt, Michael Matessa, Vernol Battiste and Walter W Johnson, ‘Why Human-Autonomy Teaming?’, in Carryl Baldwin (ed.), Advances in Neuroergonomics and Cognitive Engineering (Cham: Springer, 2018).

[19] James C Walliser, Ewart J de Visser, Eva Wiese and Tyler H Shaw, ‘Team Structure and Team Building Improve Human-Machine Teaming with Autonomous Agents’, Journal of Cognitive Engineering and Decision Making 13, no. 4 (2019): 258–278, at: https://doi.org/10.1177/1555343419867563; LL Thompson, Making the Team: A Guide for Managers, 6th Edition (Upper Saddle River NJ: Pearson, 2017), at: http://www.leighthompson.com/images/books/mtt/Table%20of%20Contents%20and%20Preface.pdf.

[20] JB Schmutz, LL Meier and T Manser, ‘How Effective Is Teamwork Really? The Relationship between Teamwork and Performance in Healthcare Teams: A Systematic Review and Meta-Analysis’, BMJ Open 9, no. 9 (2019), at: https://doi.org/10.1136/bmjopen-2018-028280.

[21] National Academies of Sciences, Engineering, and Medicine, Human-AI Teaming: State-of-the-Art and Research Needs (Washington DC: The National Academies Press, 2022), at: https://doi.org/10.17226/26355.

[22] Landon Johnson, What Is a System? (University of Texas Libraries, 2021), at: https://repositories.lib.utexas.edu/bitstream/handle/2152/111154/Johnsonpaper.pdf?sequence=2.

[23] GG Whitchurch and LL Constantine, ‘Systems Theory’, in PG Boss, WJ Doherty, R LaRossa, WR Schumm and SK Steinmetz (eds), Sourcebook of Family Theories and Methods: A Contextual Approach (New York: Plenum Press, 1993), at: https://link.springer.com/content/pdf/10.1007/978-0-387-85764-0_14.pdf.

[24] Neads, Galbreath and Farrell, From Tools to Teammates.

[25] JB Lyons, Katia Sycara, Michael Lewis and August Capiola, ‘Human–Autonomy Teaming: Definitions, Debates, and Directions’, Frontiers in Psychology 12 (2021): 19–32, at: https://doi.org/10.3389/fpsyg.2021.589585.

[26] Ibid.; AM Madni and CC Madni, ‘Architectural Framework for Exploring Adaptive Human-Machine Teaming Options in Simulated Dynamic Environments’, Systems 6, no. 4 (2018), at: https://doi.org/10.3390/systems6040044; Neads, Galbreath and Farrell, From Tools to Teammates; Walliser, de Visser and Shaw, ‘Team Structure and Team Building Improve Human-Machine Teaming with Autonomous Agents’.

[27] S Liaskos, SA McIlraith, S Sohrabi and J Mylopoulos, ‘Integrating Preferences into Goal Models for Requirements Engineering’, in 18th IEEE International Requirements Engineering Conference (IEEE Computer Society, 2010), pp. 135–144, at https://doi.org/10.1109/RE.2010.26.

[28] AV Lamsweerde, ‘From System Goals to Software Architecture’, in Marco Bernardo and Paola Inverardi (eds), Formal Methods for Software Architectures (Springer, 2003), at: http://dx.doi.org/10.1007/978-3-540-39800-4_2.

[29] Simon Garnier, Jacques Gautrais and Guy Theraulaz, ‘The Biological Principles of Swarm Intelligence’, Swarm Intelligence 1 (2007): 3–31, at: https://doi.org/10.1007/s11721-007-0004-y.

[30] Michael Lewis, ‘Human Interaction with Multiple Remote Robots’, Reviews of Human Factors and Ergonomics 9, no. 1 (2013): 131–174, at: https://doi.org/10.1177/1557234X13506688.

[31] Andreas Kolling, Phillip Walker, Nilanjan Chakraborty, Katia Sycara and Michael Lewis, ‘Human Interaction with Robot Swarms: A Survey’, IEEE Transactions on Human-Machine Systems 46, no. 1 (2016): 9–26.

[32] Ibid.

[33] Matthew Johnson, Micael Vignatti and Daniel Duran, ‘Understanding Human-Machine Teaming through Interdependence Analysis’, in Michael McNeese, Eduardo Salas and Mica R Endsley (eds), Contemporary Research: Models, Methodologies, and Measures in Distributed Team Cognition (Boca Raton FL: CRC Press, 2020).

[34] Kolling et al., ‘Human Interaction with Robot Swarms’.

[35] National Academies of Sciences, Engineering, and Medicine, Human-AI Teaming.

[36] Robert L Morasky, ‘Defining Goals—A Systems Approach’, Long Range Planning 10, no. 2 (1977): 85–89, at: https://doi.org/10.1016/0024-6301(77)90125-X.

[37] CE Rusbult and PAM Van Lange, ‘Interdependencies, Interaction and Relationships’, Annual Review of Psychology 54, no. 3 (2003): 51–75, at: https://doi.org/10.1146/annurev.psych.54.101601.145059.

[38] National Academies of Sciences, Engineering, and Medicine, Human-AI Teaming.

[39] Jean-Michel Hoc, ‘From Human–Machine Interaction to Human–Machine Cooperation’, Ergonomics 43, no. 7 (2010): 833–843, at: https://www.tandfonline.com/action/showCitFormats?doi=10.1080/001401300409044.

[40] Beni and Wang, ‘Swarm Intelligence in Cellular Robotic Systems’.

[41] Jamie C Gorman, David A Grimm, Ronald H Stevens, Trysha Galloway, Ann M Willemsen-Dunlap and Donald J Halpin, ‘Measuring Realtime Team Cognition During Team Training’, Human Factors: The Journal of the Human Factors and Ergonomics Society 65, no. 52 (2020): 825–860.

[42] Terje Aven, ‘What Is Safety Science’, Safety Science 67 (2014): 15–20.

[43] ICAO, ICAO Annex 10 Aeronautical Telecommunications—Volume 1 Radio Navigation Aids (2006).

[44] MR Endsley and WM Jones, ‘Model of Inter and Intra Team Situation Awareness: Implications for Design, Training and Measurement’, in M McNeese, E Salas and M Endsley (eds), New Trends in Cooperative Activities: Understanding System Dynamics in Complex Environments (Santa Monica CA: Human Factors and Ergonomics Society, 2001), pp. 46–67.

[45] Whitchurch and Constantine, ‘Systems Theory’.

[46] NJ Cooke, JC Gordon, CW Myers and JL Duran, ‘Interactive Team Cognition’, Cognitive Science 37, no. 2 (2013): 255–285.

[47] LA DeChurch and JR Mesmer-Magnus, ‘The Cognitive Underpinnings of Effective Teamwork: A Meta-Analysis’, Journal of Applied Psychology 95, no. 1 (2010): 32.

[48] Endsley and Jones, ‘Model of Inter and Intra Team Situation Awareness’.

[49] MR Endsley, ‘Theoretical Underpinnings of Situation Awareness’, in MR Endsley and DJ Garland (eds), Situation Awareness Analysis and Measurement (Boca Raton: CRC Press, 2000).

[50] Ibid.

[51] Innocente and Grasso, ‘Self-Organising Swarms of Firefighting Drones’.

[52] Morasky, ‘Defining Goals’.

[53] Whitchurch and Constantine, ‘Systems Theory’.

[54] BS Caldwell, RC Palmer and HM Cuevas, ‘Information Alignment and Task Coordination in Organizations: An “Information Clutch” Metaphor’, Information Systems Management 25, no. 1 (2008): 33–44.

[55] D Amodei, C Olah, J Steinhardt, P Christiano, J Schulman and D Mané, ‘Concrete Problems in AI Safety’, ArXiv preprint (2016), at: https://arxiv.org/pdf/1606.06565.pdf.

[56] L Chung, E Yu, B Nixon and J Mylopoulos, Non-Functional Requirements in Software Engineering (New York: Kluwer Academic, 2000).

[57] AV Lamsweerde and E Letier, ‘Handling Obstacles in Goal-Oriented Requirements Engineering’, IEEE Transactions on Software Engineering 26, no. 10 (2000), at: http://www-di.inf.puc-rio.br/~julio/TSE-Obstacles.pdf.

[58] C Haberstroh, ‘Control as an Organizational Process’, Management Science 6, no. 2 (1972): 165–171; RF Mager, Goal Analysis (Belmont CA: Fearon Publishers, 1972); Morasky, ‘Defining Goals’.

[59] D Amodei et al., ‘Concrete Problems in AI Safety’; Russell C Eberhart, Yuhui Shi and James Kennedy, Swarm Intelligence (Morgan Kaufmann, 2001), at: https://www.oreilly.com/library/view/swarm-intelligence/9781558605954/.

[60] A Newell and HA Simon, Human Problem Solving (Englewood Cliffs NJ: Prentice Hall, 1972); Joachim Wirth, Josef Kunsting and Detlev Leutner, ‘The Impact of Goal Specificity and Goal Type on Learning Outcome and Cognitive Load’, Computers in Human Behavior 25, no. 2 (2009), at: https://doi.org/10.1016/j.chb.2008.12.004.

[61] S Erhel and E Jamet, ‘Improving Instructions in Educational Computer Games: Exploring the Relations between Goal Specificity, Flow Experience and Learning Outcomes’, Computers in Human Behavior 91 (2018), at: https://doi.org/10.1016/j.chb.2018.09.020; EA Locke and GP Latham, ‘New Directions in Goal-Setting Theory’, Current Directions in Psychological Science 15, no. 5 (2006): 265–268, at: https://doi.org/10.1111/j.1467-8721.2006.00449.x; CS Miller, JF Lehman and KR Koedinger, ‘Goals and Learning in Microworlds’, Cognitive Science 23, no. 3 (1999): 305–336, at: https://doi.org/10.1207/s15516709cog2303_2; S Nebel, S Schneider, J Schledjewski and GD Rey, ‘Goal-Setting in Educational Video Games: Comparing Goal-Setting Theory and the Goal-Free Effect’, Simulation & Gaming 48, no. 1 (2016): 98–130, at: https://doi.org/10.1177/1046878116680869.

[62] Morasky, ‘Defining Goals’.

[63] JJ Sharples, GJ Cary, P Fox-Hughes, S Mooney, JP Evans, MS Fletcher, M Fromm, PF Grierson, R McRae and P Baker, ‘Natural Hazards in Australia: Extreme Bushfire’, Climatic Change 139 (2016): 85–99, at: https://doi.org/10.1007/s10584-016-1811-1.

[64] Noel C Joughin, ‘Engineering Considerations in the Tolerability of Risk’, The Journal of The Southern African Institute of Mining and Metallurgy 111 (2011): 535–540.

[65] RE Melchers, ‘On the ALARP Approach to Risk Management’, Reliability Engineering & System Safety 71, no. 2 (2001): 201–208.

[66] Z Assaad, ‘A Proposed Risk Categorisation Model for Human-Machine Teaming’, EICS ’22: Engineering Interactive Computing Systems Conference, June 21–24, Sophia Antipolis, France (CEUR Workshop Proceedings, 2022).