ABSTRACT
This short article introduces our Special Issue on ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’. We begin by stepping back and briefly commenting on the current military AI landscape. We then turn to the hitherto largely neglected prospect of AI-driven systems influencing state-level decision making on the resort to force. Although such systems already have a limited and indirect impact on decisions to initiate war, we contend that they will increasingly influence such deliberations in more direct ways – either in the context of automated self-defence or through decision-support systems that inform human deliberations. Citing the steady proliferation of AI-enabled systems in other realms of decision making, combined with the perceived need to match the capabilities of potential adversaries in what has aptly been described as an AI ‘global arms race’, we argue that this development is inevitable, will likely occur in the near future, and promises to be highly consequential. After surveying four thematic ‘complications’ that we associate with this anticipated development, we preview the twelve diverse, multidisciplinary, and often provocative articles that constitute this Special Issue. Each engages with one of our four complications and addresses a significant risk or benefit of AI-driven technologies infiltrating the decision to wage war.
What if intelligent machines determined whether states engaged in war? In one sense, this is merely the stuff of science fiction, or long-term speculation about how future technologies will evolve, surpass our capabilities, and take control. In another more nuanced sense, however, this is a highly plausible reality, compatible with the technologies that we have now, likely to be realised in some form in the near future (given observable developments in other spheres), and a prospect that we are willingly, incrementally bringing about.
This Special Issue addresses the risks and opportunities of the eminently conceivable prospect of AI intervening in decision making on the resort to force. Here we will step back and very briefly comment on the current military AI landscape before turning to this largely neglected domain of anticipated AI-enabled influence. We will then highlight four thematic ‘complications’ that we associate with the infiltration of AI-enabled technologies into the decision to wage war, before previewing the twelve diverse, multidisciplinary, and often provocative contributions that variously engage with them.
Current context
Artificial intelligence (AI)—the evolving capability of machines to imitate aspects of intelligent human behaviour—is already radically changing organised violence. AI has been, and is increasingly being, integrated into a wide range of military functions and capabilities. Official documentation released around the Australia, United Kingdom (UK), and United States (US) ‘AUKUS’ agreement, for example, has outlined a growing role for AI across advanced military capabilities, including a commitment to ‘Resilient and Autonomous Artificial Intelligence Technologies (RAAIT)’, under which ‘[t]he AUKUS partners are delivering artificial intelligence algorithms and machine learning to enhance force protection, precision targeting, and intelligence, surveillance, and reconnaissance’ (AUKUS Defence Ministers 2023). Not only does this proliferation of AI across military capabilities have a profound impact on the utility, accuracy, lethality, and autonomy of weapon systems (see, for example, Scharre 2024), but the intersection of AI and advanced weapons systems is also thought to have serious implications for the military balance of power (see, as one example, Ackerman and Stavridis 2024). Indeed, AI is now seen as essential to the quest for military advantage and figures centrally in current thinking about the preservation of military superiority. As one analysis has explained, AI ‘will enable the United States to field better weapons, make better decisions in battle, and unleash better tactics’ (Buchanan and Miller 2017, 21). In the words of former US Undersecretary of Defense, Michele Flournoy (2023), ‘AI is beginning to reshape US national security’.
This growing importance and increasing exploitation of military AI has been accompanied by concern about the potential risks and adverse consequences that might result from its use. A world of AI-infused decision making and concomitant compressed timelines, coupled with intelligent automated—and even fully autonomous—weapons, brings dangers as well as military advantages. This has prompted attempts to develop rules and limits that could constrain at least some military applications of AI and minimise dangerous or undesirable outcomes. Such regulation is not merely an academic concern. There is ample evidence that governments, too, are growing concerned about the broad risks associated with the use of AI in the military sphere. An international summit on ‘Responsible AI in the Military Domain’, held in The Hague in February 2023, issued a ‘Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy’ that has been endorsed by more than 50 states as of February 2024 (US Department of State 2023). It comprises a list of desirable measures intended to promote the safe and prudent utilisation of military AI, including the proposition (relevant to a concern addressed throughout this Special Issue) that its use should take place ‘within a responsible human chain of command and control’ (US Department of State 2023).
In the context of announcing that Australia would join this Declaration in November 2023, Australian Defence Minister, the Hon Richard Marles MP, reiterated Australia’s commitment to ‘engage actively in the international agenda towards the responsible research, development, deployment and application of AI’ (Australian Government 2023). Moreover, in a joint statement following their October 2023 meeting in Washington, DC, US President Joe Biden and Australian Prime Minister Anthony Albanese affirmed that ‘States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous functions and systems’ (Prime Minister of Australia 2023). Of course, in order to take such measures meaningfully, it is essential to understand—and anticipate—the full range of ways in which AI-enabled systems will be employed in a military context. It is the purpose of this Special Issue to enhance this understanding by examining a hitherto largely neglected and (we suggest) emerging use of AI-driven systems.
Finally, we would be remiss not to acknowledge the recent attention—and concern—generated by the spectre of military AI alongside nuclear weapons. Indicative of the international apprehension prompted by this potential coupling, there was widely reported speculation leading up to the November 2023 bilateral summit in San Francisco between Presidents Joe Biden and Xi Jinping that the US and China were ‘poised to pledge a ban on the use of artificial intelligence … in the control and deployment of nuclear warheads’ (South China Morning Post 2023; see also Porter 2023; Saballa 2023). No such pledge was made on the day, but, following the summit, both governments appeared to respond to the anticipation surrounding the predicted announcement with statements on the need for further talks between the US and China to discuss the risks of advanced AI systems (Lewis 2023; Ministry of Foreign Affairs of the People’s Republic of China 2023; White House 2023a; White House 2023b). The utilisation of AI in the realm of nuclear weapons has prompted considerable analysis and apprehension for the obvious reason that the potential stakes are so enormous (see, for example, Kaur 2024; Shaw 2023; and Parke 2023). As Depp and Scharre starkly observe, ‘[i]mproperly used, AI in nuclear operations could have world-ending effects’ (Depp and Scharre 2024). In its 2022 Nuclear Posture Review, the US proclaimed as policy that, without exception, humans will remain in the loop in any decisions involving the use of nuclear weapons (US Department of Defense 2022, 13). A basic worry is that AI could be integrated into nuclear command and control in ways that automate response capacities, possibly reinforcing deterrence but also raising risks of unwanted escalation or loss of control. This concern that the introduction of AI can create new vulnerabilities for nuclear command and control has led to calls for norms and guidelines intended to limit the nuclear instability and threats to nuclear deterrence that could ensue (Avin and Amadae 2019).
In sum, the effect of AI on the performance of weapon systems, the conduct of military operations, and the vulnerabilities and strengths of military forces is of great importance. These developments have serious (if still uncertain) implications for the future of war, and have gripped the attention of academics, state leaders, and the general public alike. However, the intellectual ferment and policy deliberations inspired by the proliferation of AI-driven military tools have focused largely on the ways in which force will be employed (and transformed) as a result, rather than on the question of how this constellation of emerging technologies is likely to inform (and potentially transform) decision making on whether and when states engage in war. It is to this latter question that we turn.
A neglected prospect
The focus of academics and policy makers has been overwhelmingly directed towards the use of AI-enabled systems in the conduct of war. These include, prominently, the emerging reality of ‘lethal autonomous weapons systems’ (‘LAWS’—or, more colloquially and provocatively, ‘killer robots’) and decision-support systems in the form of algorithms that rely on big data analytics and machine learning to recommend targets in the context of drone strikes and bombings (such as those by Israel in Gaza that have generated recent attention (Abraham 2024; Davies, McKernan, and Sabbagh 2023)). By contrast, we seek to address the relatively neglected prospect of employing AI-enabled tools at various stages and levels of deliberation over the resort to war.[1]
In other words, our focus in this Special Issue—and in the broader project from which it emerges—takes us from AI on the battlefield to AI in the war-room. We move from the decisions of soldiers involved in selecting and engaging targets (as well as authorising and overseeing the selection and engagement of targets by intelligent machines) to state-level decision making on the very initiation of war and military interventions; from jus in bello to jus ad bellum considerations (in the language of the just war tradition); and from actions adjudicated by international humanitarian law to actions constrained and condoned by the United Nations (UN) Charter’s prohibition on the resort to force and its explicit exceptions.
This shift in focus, at this particular point in time, is crucial. It anticipates what we believe is an inevitable change in how states will arrive at the consequential decision to go to war. We base our prediction that AI will infiltrate resort-to-force decision making in part on the steady proliferation of AI-driven systems—including predictive, machine-learning algorithms—to aid decision making in a host of other realms. Such systems are relied upon for everything from recruitment, insurance decisions, medical diagnostics in hospitals, and the allocation of welfare, to policing practices, support in the cockpits of commercial airplanes, and judgements on the likelihood of recidivism. In short, human decision making is becoming more and more reliant on the assistance of AI. In addition, the need to match the capabilities of potential adversaries in the increasingly high-speed, always high-stakes context of war fuels what has aptly been called the latest ‘global arms race’ (Simonite 2017). Although AI-enabled systems currently have only a limited and indirect role in state-level decision making on the resort to force, we are convinced that they will progressively influence such deliberations in more direct ways. By examining the prospect of AI gradually intervening in resort-to-force decision making now, it is possible to identify benefits and risks of using these technologies while there is still time to find ways, respectively, to enhance or to mitigate them.
The gravity of these considerations is difficult to overstate. As Ashley Deeks, Noam Lubell, and Daragh Murray (2019, 16) have provocatively posed,
[i]f the possibility that a machine might be given the power to ‘decide’ to kill a single enemy soldier is fraught with ethical and legal debates, what are we to make of the possibility that a machine could ultimately determine whether a nation goes to war, and thus impact thousands or millions of lives?
Of course, an intelligent machine ‘determining’ whether a state engages in war could mean different things. If we bracket science fiction scenarios and long-term futuristic speculation, there are two ways that current AI-driven systems could conceivably impact resort-to-force decision making. First, AI-enabled decision-support systems could be used to inform deliberations on whether to engage in war. In such a scenario, human decision makers would draw on algorithmic recommendations and predictions to reach decisions on the resort to force. This is already beginning to happen, at least indirectly, with respect to the AI-aided collection and analysis of intelligence, which makes its way up organisational hierarchies and chains of command. Alternatively, AI-driven systems could themselves calculate and implement decisions on the resort to force, such as, conceivably, in the context of defence against cyber attacks. Moreover, worrying suggestions of an AI-driven automated nuclear response to a first strike have been mooted—and threatened—particularly in the case of a decapitation attack. (It has been reported that the Soviet Union bequeathed to Russia a ‘dead hand’ launch system, and that it is still in place, so this is not an unthinkable possibility. The Russian ‘Perimeter’ system is described in Depp and Scharre 2024; see also Andersen 2023, 12.) In such cases, a course of action would be determined and implemented by an AI-enabled autonomous system—with or without human oversight. Both types of scenario entail foreseeable (and likely near-future) developments that demand immediate attention.
Four complications
For all the potential benefits of these AI-driven systems—which are variously able to analyse vast quantities of data, make recommendations and predictions by uncovering patterns in data that human decision makers cannot perceive, and respond to potential attacks with a speed and efficiency that we could not hope to match—challenges abound. The workshop that led to this Special Issue, ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’, held at the Australian National University (ANU), 29–30 June 2023, set out to address four thematic ‘complications’ that we proposed would accompany the integration of AI-enabled systems into state-level decision making on the resort to force.
Complication 1 relates to the displacement of human judgement in AI-driven resort-to-force decision making and possible implications for deterrence theory and the unintended escalation of conflict. When programmed to recommend—or independently calculate and implement—a response to a particular set of circumstances, intelligent machines will behave differently from human agents. This difference could challenge our understandings of deterrence. Current perceptions of a state’s willingness to resort to force are based on assumptions of human judgement, resolve, and forbearance rather than machine-generated outputs. Moreover, AI-enabled systems delegated the task of independently responding to aggression in certain contexts would make and implement decisions at speeds impossible for human actors, thereby accelerating decision cycles. They would also be liable to misinterpret human signalling (on the desire to de-escalate a conflict, for example). Both factors could contribute to inadvertent, and potentially catastrophic, escalations in the resort to force in scenarios where human decision makers would have exercised restraint (see, for example, Wong et al. 2020, chapters 7 and 8).
Complication 2 highlights the possible implications of automation bias. Empirical studies show that individuals and teams that rely on AI-driven systems often experience ‘automation bias’, or the tendency to accept computer-generated outputs without question (Cummings 2006; 2012; Mosier and Fischer 2010; Mosier and Manzey 2019; Skitka, Mosier, and Burdick 1999). This tendency can make human decision makers less likely to use their own expertise and judgement to test machine-generated recommendations. Detrimental consequences of automation bias include the acceptance of error, the deskilling—including ‘moral deskilling’—of human actors (Vallor 2013), and, as one of us has argued in this collection and elsewhere, the promotion (alongside other factors) of ‘misplaced responsibility’ in war, or the dangerous misperception that intelligent machines can bear moral responsibility for what are necessarily human decisions and their outcomes (Erskine 2024a, 551–554; 2024b).
Complication 3 confronts algorithmic opacity and its implications for democratic and international legitimacy. Machine learning processes are frequently opaque and unpredictable. Those who are guided by them often do not comprehend how predictions and recommendations are reached, and do not grasp their limitations. This current lack of transparency in much AI-driven decision making has led to negative consequences across a range of contexts (Knight 2017; Pasquale 2016; 2017; Vogel et al. 2021). As a government’s democratic and international legitimacy require a compelling and accessible justification for the decision to resort to war, this lack of transparency poses grave concerns when machines inform, or independently calculate and implement, such courses of action.
Complication 4 addresses the likelihood that AI-enabled systems would exacerbate organisational decision-making pathologies. Studies in both International Relations (IR) and organisational theory reveal the existing complexities and ‘pathologies’ of organisational decision making (within IR, see for example Barnett and Finnemore 1999). AI-driven decision-support and automated systems intervening in these complex structures risk magnifying such problems. Their contribution to decisions at the national—and even intergovernmental—level could distort and disrupt strategic and operational decision-making processes and chains of command.
These proposed complications are explored in the twelve articles that follow in the context of either automated self-defence or the use of AI-driven decision-support systems to inform human resort-to-force deliberations. The articles consider how best to approach these complications, with each identifying a risk or opportunity of using AI-enabled systems in one of these contexts, asking how the risk can be mitigated or the opportunity promoted, and, sometimes, suggesting that an ostensible ‘complication’ is overstated and in no need of redress.[2]
Contributions
Significantly (and perhaps unusually), this collective attempt to grasp the potential hazards and benefits of employing AI-driven systems to contribute to the decision to wage war draws on a range of disciplines. These interventions are variously made from the perspectives of political science, IR, law, computer science, philosophy, sociology, psychology, and mathematics.
The volume begins with Ashley Deeks (2024) anticipating that states will be increasingly tempted (given the prospect of hypersonic attacks) to allow AI-driven systems to make autonomous judgements on the initiation of force in particular cases. Observing that this use of autonomy would entail effectively ‘delegating’ consequential resort-to-force decision making to machines, Deeks raises crucial legal and normative questions about such ‘machine delegation’, turning to the US legal system to interrogate whether and how it could be justified. Benjamin Zala (2024) further examines this weighty possibility of dismissing humans from the resort-to-force decision-making ‘loop’ in certain circumstances by addressing the specific high-stakes scenario of using AI and machine learning in nuclear command and control systems. Zala warns of two routes by which AI-enabled systems would increase a state’s incentive to strike first (with either nuclear or strategic non-nuclear weapons): automation in military deployment, and the introduction of AI-informed human decision making in relation to early-warning threat assessment. In both cases, he argues that a loss of human caution and forbearance would be pivotal. Marcus Holmes and Nicholas J. Wheeler (2024) intervene in the discussion with a very different approach to the potential role of AI in nuclear crisis management. While Zala’s main focus is on the risks of AI-enabled systems in this context, Holmes and Wheeler turn their attention optimistically to the opportunities. Although they explicitly reject any notion that AI-driven systems should be allowed to operate nuclear command and control, they maintain that these systems could valuably enhance human decision making in such scenarios. Acknowledging that AI lacks emotional intelligence, they nevertheless provocatively propose that AI-enabled technologies offer opportunities to foster empathy, trust, and what they call ‘security dilemma sensibility’.
Also addressing these seemingly more innocuous cases where AI-enabled tools supplement rather than supplant human decision making on the resort to force, Toni Erskine (2024b) argues that our interaction with such decision-support systems threatens to undermine our adherence to international norms of restraint in war. Erskine identifies two sources of this detrimental effect: our tendency to mistake AI-enabled tools for responsible agents in their own right (‘the risk of misplaced responsibility’); and our misperception of the outputs produced by these tools (‘the risk of predicted permissibility’). Each misstep, she argues, not only makes the initiation of war seem more permissible in particular cases, but also collaterally chips away at the hard-won international norms themselves. In line with Erskine’s first proposed risk, Mitja Sienknecht (2024) is also concerned with how human–machine decision making on the resort to force complicates attributions of responsibility. Yet she raises a different problem: ‘responsibility gaps’ that arise when decisions are informed or made by AI-enabled systems to which responsibility cannot coherently be apportioned. In response, Sienknecht introduces the intriguing concept of ‘proxy responsibility’, which acknowledges the political, military, and economic structures that surround AI-influenced decision making on the resort to force and seeks to provide a pragmatic way of attributing responsibility for machine actions to human agents.
Frequently accompanying such complex questions of responsibility attribution are discussions of how the human actor is—and should be—situated in relation to AI-enabled systems when it comes to decision making. Indeed, questions of where, whether, and why humans should be in the war-initiation decision-making ‘loop’ alongside intelligent machines are returned to and debated throughout this volume—often amidst claims to the irreplaceable virtues and capacities of human actors. Jenny L. Davis (2024) offers a novel take on these debates—and an added stipulation to the common call for ‘meaningful human control’—by focusing on the type of human actor that should be tasked with interpreting and implementing AI-driven outputs in resort-to-force deliberations (and other high-stakes scenarios). It is not enough to simply demand ‘humans in the loop’. Rather, she argues that we need ‘experts-in-the-loop’, a conclusion that implies imperatives to employ, support, and provide ongoing professional training to human practitioners. Maurice Chiodo, Dennis Müller, and Mitja Sienknecht (2024) concur with Davis on the importance of training and educating human actors, but turn their attention to the education of AI developers. They begin with the assumption that responsible military AI development is needed in order to mitigate the sorts of risks of integrating AI technology into resort-to-force decision making identified by the other contributors. Focusing on the need to provide developers with clear training on ethical issues as a way of mitigating detrimental path dependencies that lead to such risks, they propose an original educational framework (‘10 pillars of responsible AI development’) and emphasise the need for AI developers to be trained in how AI systems will actually be integrated into military processes.
Highlighting a point alluded to by a number of contributors, Sarah Logan (2024) addresses the vital role that intelligence analysis plays in decision making on the resort to force. As AI becomes increasingly important to such analyses, she anticipates the accompanying dangers of our reliance on large language models (LLMs). Specifically, she cautions that generative AI (or algorithms that can be used to create new content and that draw on LLMs) will exacerbate informational ‘pathologies’ with which intelligence analyses are already afflicted: ‘information scarcity’ and ‘epistemic scarcity’. Explaining that these pathologies are compounded by the limited data available to train LLMs, she notes that Western governments, such as Australia’s, face particularly detrimental constraints in accessing such data compared to authoritarian regimes such as China and Russia. Pivoting from Logan’s incisive account of AI-enabled tools as flawed providers of information and curators of selective knowledge to AI-enabled tools as means of directly augmenting human cognitive capacities, Karina Vold (2024) returns more optimistically to the opportunities afforded by AI systems in resort-to-force decision making. Specifically, she extols the strategic military advantages that accompany what she calls ‘human-AI cognitive teaming’. While acknowledging that becoming too reliant on AI-enabled systems carries risks for both individual users and broader society (as explored by other contributors), Vold valuably highlights the role that algorithmic decision-support systems can play in enhancing otherwise limited human capacities that are particularly important for state-level resort-to-force decision making: inter alia, memory, attention and search functions, planning, communication, comprehension, quantitative and logical reasoning, navigation, and even (to return to Holmes and Wheeler’s provocation) emotion and self-control. Osonde A. Osoba (2024) examines the integration of AI into what he calls ‘military decision-making ecosystems’ using two analytic frames: an artefact-level analysis focused on the technical properties of individual AI systems, and a systems-level perspective aimed at highlighting the broader institutional implications of AI use. Referring to Vold’s conception of AI-enabled cognitive enhancements, Osoba’s artefact-level analysis highlights the potential positive impacts that AI integration can have in terms of increasing what he intriguingly describes as ‘cognitive diversity’ in decision-making processes. Osoba then argues that both states and their national security institutions tasked with resort-to-force decision making qualify as complex adaptive systems. Based on this identification, he draws on dynamics observed in other stable complex systems to offer some sceptical assessments of concerns surrounding human deskilling and algorithmic transparency.
Reminiscent of Zala’s observation that intelligent machines lack the crucial human capacity to be moved (or, more aptly, constrained) by glimmers of doubt that would promote the exercise of caution in decision making on the resort to force, Neil Renic (2024) warns of the dulling of our ‘tragic imagination’ if we continue to allow machines to infiltrate this process. Renic compellingly argues that the ‘speed, inflexibility, and false confidence’ of AI-assisted decision making would risk fostering an insensitivity to what he identifies as the tragic qualities of violence—namely, its limits and unpredictability—as well as a denial of our own fallibility. As such, he maintains that some aspects of decision making must never be forfeited to AI-driven systems. Concluding the collection on a similarly cautionary note, Bianca Baggiarini (2024) examines the potential dangers of ‘algorithmic reason’ in the context of decision making on the resort to force.[3] She presents a powerful case that the very technologies that promise both certainty and decision-making efficiency actually obscure what we can see and know through practices of ‘in-visibility, anonymity, and fragmentation’. Sharing Osoba’s scepticism of calls for algorithmic transparency, which she sees as woefully misguided, Baggiarini concludes by expressing concerns that AI-supported decision making on the resort to force is not compatible with democratic legitimacy.
The contributors to this Special Issue do not always agree. They reach different conclusions on the benefits or risks that will accompany states’ anticipated reliance on AI-enabled systems in resort-to-force decision making. They differ on the degree of optimism or pessimism with which this development should be approached. Moreover, they focus on divergent points at which AI will infiltrate these deliberative processes and address a range of contexts in which this is likely to have an impact. Nevertheless, the articles in this collection speak to each other and share a commitment to understanding a consequential and (we suggest) inevitable change in decisions to wage war. Each article represents a process of learning from the diverse perspectives brought together as part of this important, ongoing conversation. We hope that this collection prompts engagement, reflection, debate, and further research.
Acknowledgements
We are grateful to the Australian Government, Department of Defence for a two-year Strategic Policy Grant, which generously funded the international workshop from which this Special Issue has emerged. We would also like to thank Emily Hitchman for her invaluable assistance throughout every aspect of this project and the production of this Special Issue, including incisive feedback on individual articles, Fionn Parker for his editorial assistance while Summer Intern on this project in December 2023 and January 2024, and the superb editorial team at the Australian Journal of International Affairs (AJIA).
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Funding
This work was supported by the Department of Defence, Australian Government.
Notes on contributors
Toni Erskine
Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at the Australian National University (ANU) and Associate Fellow of the Leverhulme Centre for the Future of Intelligence at Cambridge University. She is currently Chief Investigator of the two-year research project, ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’, funded by the Australian Government, Department of Defence. She recently served as Director of the Coral Bell School (2018–23) and Editor of International Theory: A Journal of International Politics, Law, and Philosophy (2019–23). She is the recipient of the International Studies Association’s 2024–2025 Distinguished Scholar Award in International Ethics.
Steven E. Miller
Steven E. Miller is Director of the International Security Program in the Belfer Center for Science and International Affairs at Harvard University’s Kennedy School of Government. He also serves as Editor-in-Chief of the scholarly journal, International Security, and co-editor of the International Security Program’s book series, BCSIA Studies in International Security (published by The MIT Press). Miller is a Fellow of the American Academy of Arts and Sciences, where he has long been a member of (and formerly chaired) the Committee on International Security Studies (CISS). He is also a member of the Council and Chair of the Executive Committee of International Pugwash.
Notes
1 Important exceptions to this relative neglect include: Deeks, Lubell, and Murray (2019); Wong et al. (2020); and Horowitz and Lin-Greenberg (2022).
2 Osonde A. Osoba (2024), for example, offers the ‘non-intuitive insight’ that popular policy objectives such as mitigating human deskilling and enhancing algorithmic transparency are, in fact, ‘unnecessary or counterproductive’.
3 Baggiarini borrows the phrase ‘algorithmic reason’ from Aradau and Blanke (2022).
References
- Abraham, Yuval. 2024. “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine, April 3, 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/.
- Ackerman, Elliot, and James Stavridis. 2024. “Drone Swarms Are About to Change the Military Balance.” The Wall Street Journal, March 14, 2024.
- Andersen, Ross. 2023. “Never Give Artificial Intelligence the Nuclear Codes.” The Atlantic, June 2023, 11–15.
- Aradau, Claudia, and Tobias Blanke. 2022. Algorithmic Reason: The New Government of Self and Other. Oxford: Oxford University Press.
- AUKUS Defence Ministers. 2023. “AUKUS Defence Ministers Meeting Joint Statement.” December 1, 2023. https://www.defense.gov/News/Releases/Release/Article/3604511/aukus-defense-ministers-meeting-joint-statement/.
- Australian Government. 2023. “Australia Joins Declaration on Safe and Responsible Artificial Intelligence in the Military.” November 3, 2023. https://www.minister.defence.gov.au/media-releases/2023-11-03/australia-joins-declaration-safe-and-responsible-artificial-intelligence-military.
- Avin, Shahar, and S. M. Amadae. 2019. “Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers, and People.” In The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives, edited by Vincent Boulanin, 105–118. Stockholm: Stockholm International Peace Research Institute (SIPRI).
- Baggiarini, Bianca. 2024. “Algorithmic War and the Dangers of In-Visibility, Anonymity, and Fragmentation.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Barnett, Michael N., and Martha Finnemore. 1999. “The Politics, Power, and Pathologies of International Organizations.” International Organization 53 (4): 699–732. https://doi.org/10.1162/002081899551048.
- Buchanan, Ben, and Taylor Miller. 2017. Machine Learning for Policymakers: What It Is and Why It Matters. The Cyber Security Project. Cambridge, MA: Belfer Center for Science and International Affairs, June 2017.
- Chiodo, Maurice, Dennis Müller, and Mitja Sienknecht. 2024. “Educating AI Developers to Prevent Harmful Path Dependency in AI Resort-to-Force Decision Making.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Cummings, Mary L. 2006. “Automation and Accountability in Decision Support System Interface Design.” Journal of Technology Studies 32 (1): 23–31. https://doi.org/10.21061/jots.v32i1.a.4.
- Cummings, Mary L. 2012. “Automation Bias in Intelligent Time Critical Decision Support Systems.” American Institute of Aeronautics and Astronautics (AIAA) 1st Intelligent Systems Technical Conference, Chicago, Illinois.
- Davies, Harry, Bethan McKernan, and Dan Sabbagh. 2023. “‘The Gospel’: How Israel Uses AI to Select Bombing Targets in Gaza.” The Guardian, December 1, 2023. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets.
- Davis, Jenny L. 2024. “Experts-in-the-Loop and Resort-to-Force Decision Making: Elevating Humanism in High-Stakes Automation.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Deeks, Ashley. 2024. “Delegating War Initiation to Machines.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Deeks, Ashley, Noam Lubell, and Daragh Murray. 2019. “Machine Learning, Artificial Intelligence, and the Use of Force by States.” Journal of National Security Law & Policy 10 (1): 1–25.
- Depp, Michael, and Paul Scharre. 2024. “Artificial Intelligence and Nuclear Stability.” War on the Rocks, January 16, 2024.
- Erskine, Toni. 2024a. “AI and the Future of IR: Disentangling Flesh-and-Blood, Institutional, and Synthetic Moral Agency in World Politics.” Review of International Studies 50 (3): 534–559.
- Erskine, Toni. 2024b. “Before Algorithmic Armageddon: The Erosion of Restraint as an Immediate Risk of AI Infiltrating the Decision to Wage War.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Flournoy, Michele A. 2023. “AI Is Already at War: How Artificial Intelligence Will Transform the Military.” Foreign Affairs 102 (6): 56–69, November–December 2023.
- Holmes, Marcus, and Nicholas J. Wheeler. 2024. “The Role of Artificial Intelligence in Nuclear Crisis Decision Making: A Complement, Not a Substitute.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Horowitz, Michael C., and Erik Lin-Greenberg. 2022. “Algorithms and Influence: Artificial Intelligence and Crisis Decision-Making.” International Studies Quarterly 66 (4): 1–11.
- Kaur, Silky. 2024. “Artificial Intelligence and the Evolving Landscape of Nuclear Strategy.” The Equation, March 4, 2024.
- Knight, Will. 2017. “The Dark Secret at the Heart of AI.” MIT Technology Review, April 11, 2017. https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/.
- Lewis, Jeffrey. 2023. “The Militarized AI Risk That’s Bigger Than ‘Killer Robots’.” Vox, November 28, 2023. https://www.vox.com/future-perfect/2023/11/28/23972547/the-militarized-ai-risk-thats-bigger-than-killer-robots.
- Logan, Sarah. 2024. “Tell Me What You Don’t Know: Large Language Models and the Pathologies of Intelligence Analysis.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Ministry of Foreign Affairs of the People’s Republic of China. 2023. “President Xi Jinping Meets with U.S. President Joe Biden.” November 16, 2023. https://www.fmprc.gov.cn/mfa_eng/zxxx_662805/202311/t20231116_11181442.html.
- Mosier, Kathleen L., and U. M. Fischer. 2010. “Judgment and Decision Making by Individuals and Teams: Issues, Models, and Applications.” Reviews of Human Factors and Ergonomics 6 (1): 198–256. https://doi.org/10.1518/155723410X12849346788822.
- Mosier, Kathleen L., and Dietrich Manzey. 2019. “Humans and Automated Decision Aids: A Match Made in Heaven?” In Human Performance in Automated and Autonomous Systems, edited by Kathleen L. Mosier and Dietrich Manzey, 19–42. Boca Raton, FL: CRC Press.
- Osoba, Osonde A. 2024. “A Complex-Systems View of Military Decision Making.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Parke, Melissa. 2023. “Preventing AI Nuclear Armageddon.” Project Syndicate, November 8, 2023.
- Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
- Pasquale, Frank. 2017. “Secret Algorithms Threaten the Rule of Law.” MIT Technology Review, June 1, 2017. https://www.technologyreview.com/2017/06/01/151447/secret-algorithms-threaten-the-rule-of-law/.
- Porter, Tom. 2023. “Biden and Xi Will Sign a Deal to Keep AI Out of Control Systems for Nuclear Weapons: Report.” Business Insider, November 13, 2023. https://www.businessinsider.com/biden-xi-deal-ai-out-nuclear-weapons-systems-apec-report-2023-11.
- Prime Minister of Australia. 2023. “United States-Australia Joint Leaders’ Statement – Building an Innovation Alliance.” October 25, 2023. https://www.pm.gov.au/media/united-states-australia-joint-leaders-statement-building-innovation-alliance.
- Renic, Neil. 2024. “Tragic Reflection, Political Wisdom, and the Future of Algorithmic War.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Saballa, Joe. 2023. “Biden, Xi to Sign Deal Prohibiting AI in Drones, Nuclear Weapons.” The Defense Post, November 15, 2023. https://www.thedefensepost.com/2023/11/15/biden-xi-prohibit-ai/.
- Scharre, Paul. 2024. “The Perilous Coming of Age of AI Warfare: How to Limit the Threat of Autonomous Weapons.” Foreign Affairs, February 29, 2024.
- Shaw, Douglas B. 2023. “Nuclear Deterrence: Unsafe at Machine Speed.” Arms Control Today, December 2023.
- Sienknecht, Mitja. 2024. “Proxy Responsibility: Addressing Responsibility Gaps in Human–Machine Decision Making on the Resort to Force.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- Simonite, Tom. 2017. “For Superpowers, Artificial Intelligence Fuels a New Global Arms Race.” Wired, September 8, 2017. https://www.wired.com/story/for-superpowers-artificial-intelligence-fuels-new-global-arms-race/.
- Skitka, Linda J., Kathleen L. Mosier, and Mark Burdick. 1999. “Does Automation Bias Decision-Making?” International Journal of Human–Computer Studies 51 (5): 991–1006. https://doi.org/10.1006/ijhc.1999.0252.
- South China Morning Post. 2023. “Biden, Xi Set to Pledge Ban on AI in Autonomous Weapons Like Drones, Nuclear Warhead Control: Sources.” November 11, 2023. https://finance.yahoo.com/news/biden-xi-set-pledge-ban-093000535.html.
- US Department of Defense. 2022. Nuclear Posture Review. https://media.defense.gov/2022/Oct/27/2003103845/-1/-1/1/2022-NATIONAL-DEFENSE-STRATEGY-NPR-MDR.PDF.
- US Department of State. 2023. “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” November 9, 2023. https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy-2/.
- Vallor, Shannon. 2013. “The Future of Military Virtue: Autonomous Systems and Moral Deskilling in the Military Profession.” In 2013 5th International Conference on Cyber Conflict (CyCon 2013): Proceedings, edited by Karlis Podens, Jan Stinissen, and Markus Maybaum, 471–486. Tallinn, Estonia: NATO CCDCOE.
- Vogel, Kathleen M., Gwendolynne Reid, Christopher Kampe, and Paul Jones. 2021. “The Impact of AI on Intelligence Analysis: Tackling Issues of Collaboration, Algorithmic Transparency, Accountability, and Management.” Intelligence and National Security 36 (6): 827–848.
- Vold, Karina. 2024. “Human-AI Cognitive Teaming: Using AI to Support State-Level Decision Making on the Resort to Force.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
- White House. 2023a. “Readout of President Joe Biden’s Meeting with President Xi Jinping of the People’s Republic of China.” November 15, 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/11/15/readout-of-president-joe-bidens-meeting-with-president-xi-jinping-of-the-peoples-republic-of-china-2/.
- White House. 2023b. “Remarks by President Biden in a Press Conference, Woodside, CA.” November 16, 2023. https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/11/16/remarks-by-president-biden-in-a-press-conference-woodside-ca/.
- Wong, Yuna Huh, John Yurchak, Robert W. Button, Aaron B. Frank, Burgess Laird, Osonde A. Osoba, Randall Steeb, Benjamin N. Harris, and Sebastian Joon Bae. 2020. Deterrence in the Age of Thinking Machines. Santa Monica, CA: RAND Corporation.
- Zala, Benjamin. 2024. “Should AI Stay or Should AI Go? First Strike Incentives and Deterrence Stability.” Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making, Special Issue of Australian Journal of International Affairs 78 (2) (this issue).
Australian Journal of International Affairs, published online 31 May 2024.