
Would Artificial Intelligence change the basis of strategy?


Introduction

The Artificial Intelligence (AI) revolution has brought dramatic transformations across sectors such as security, the economy, healthcare, and education. This essay aims to scrutinise how AI affects the basis of strategy. Firstly, it will describe AI using existing overarching interpretations and shed light on its competencies by subdividing it into its varieties – artificial narrow intelligence (ANI), artificial general intelligence (AGI), and superintelligence. Secondly, it will define strategy, underscoring the Clausewitzian perspective of strategy as “the use of the engagements for the purpose of the war.”[1] Thereafter, it will peruse the fundamental effects of ANI on strategy, followed by a long-view consideration of the possible changes that AGI could bring to the basis of strategy. Finally, it will conclude that, whereas AGI, if it ever exists, could alter both the nature of war and its strategy, ANI will seriously affect the conduct of warfare while leaving strategy-making, which remains a human preserve, largely unchanged.

What is AI?

Human intelligence is considered the ability to achieve one’s goals through “a combination of analytical, creative, and practical abilities.”[2] Although the term AI has many loose interpretations, Bellman’s description of it as a “process of automation of certain tasks, such as making decisions and learning, that is a reflection of human intelligence”[3] can be taken as a starting point to analyse AI’s impact on strategy.

Presently, the AI systems functioning around us are varieties of ANI. They are limited to utilizing algorithms to attend to singular tasks like voice recognition, driving a car, or image classification. ANI, through techniques such as reinforcement learning, achieves human-level or better capabilities in pre-determined fields. Such systems are effective at accomplishing a specific task, but cannot flexibly acclimatize to execute different tasks or modify their functioning to face a novel challenge.[4] AGI, by contrast, represents a broad intelligence with dynamic, human-like cognitive skills, equipped to flexibly employ its learning in multiple spheres, autonomously switch between tasks, and address unprecedented challenges.[5] Although Google’s DeepMind introduced a generalist AI, Gato, which can carry out a wide range of activities from stacking blocks to captioning images using the same algorithm,[6] developing AGI with capabilities matching or even exceeding human intelligence currently seems a distant prospect. Moreover, it is expected that once AGI is developed, a point of “singularity” in AI will be reached, eventually resulting in superintelligence that surpasses human-level intelligence.
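
To make the narrowness concrete, the following minimal sketch (a tabular Q-learning agent on an invented toy corridor task, with made-up parameters) illustrates the reinforcement-learning pattern described above: the agent becomes reliably optimal at its one pre-determined task, yet the learned table is meaningless for any other task.

```python
# Illustrative sketch only: tabular Q-learning on an invented 6-cell corridor.
# The environment, reward, and hyperparameters are assumptions for this essay,
# not drawn from any real system.
import random

N_STATES = 6                 # corridor cells 0..5; the reward sits at cell 5
ACTIONS = (1, -1)            # step right or left
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: move, clip to the corridor, reward only at the end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    state = 0
    for _ in range(100):                       # cap the episode length
        if random.random() < EPSILON:          # occasionally explore
            action = random.choice(ACTIONS)
        else:                                  # exploit, breaking ties randomly
            action = max(ACTIONS, key=lambda a: (q_table[(state, a)], random.random()))
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q_table[(nxt, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = nxt
        if state == N_STATES - 1:
            break

# The learned policy (step right everywhere) is optimal for this corridor and
# useless for anything else -- the hallmark of narrow intelligence.
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES)])
```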

What is Strategy?

Betts focussed on the deliberative process of strategy and claimed that strategy represents the “link between military means and political ends,” the rational scheme for converting the former into the latter.[7] Gray likewise affirmed that strategy is “the attempted achievement of desired political ENDS, through the choice of suitable strategic WAYS, employing largely the military MEANS then available or accessible.”[8] Because of its complex and multifaceted nature, strategy admits many interpretations. But at its core, the basis of strategy is the systematic setting of objectives and the meticulous pursuit of contextual political goals through considered choices involving, among much else, military resource allocation, dynamic prioritization, and pliable trade-offs.

ANI and Strategy

Technological developments have previously brought “Revolutions in Military Affairs,” and Payne asserts that AI will inevitably impact the nature and conduct of war by affecting the factors behind human decision-making.[9] Swarms of autonomous AI drones with networked decision-making ability could launch a sortie that potentially overwhelms and outwits an adversary's sophisticated air defence system more efficiently than human-controlled vehicles.[10] An ANI-controlled virtual fighter jet beating an F-16 human pilot in DARPA’s simulated dogfight shows how ANI’s reinforcement training can be expected soon to outperform humans in real-life dogfights.[11] The startling insight of Clausewitz’s strategic theory, that defence is a stronger method of war than offence, is being challenged by ANI’s potential. The speed associated with ANI, its quick cycling of the OODA loop (observe, orient, decide, and act), its incredible precision, and its capability of replacing humans on the front line can benefit offensive capabilities immensely, affecting the offence-defence balance with far-reaching repercussions for deterrence in the international order.[12] ANI’s tactical advantages allow audacious goals, such as the remote assassination of an adversary, even where this constitutes a casus belli, to be chosen by a state’s belligerent decision-makers possessing the necessary ANI weapons. The goal of strategy to “get more out of a situation than the starting balance of power would suggest”[13] is surely not changed by AI, but AI certainly can affect the selection of political goals, which remain intertwined with the available AI capabilities.
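
The speed argument reduces to simple arithmetic: whoever cycles the OODA loop faster gets more decide-act opportunities inside the same engagement window. The cycle times below are invented assumptions for illustration, not empirical figures.

```python
# Back-of-envelope sketch: decision opportunities per engagement window.
# Both cycle times are assumptions made up for this illustration.
HUMAN_CYCLE_S = 2.0    # assumed seconds per human OODA cycle
ANI_CYCLE_S = 0.05     # assumed seconds per machine OODA cycle

def decisions_within(window_s, cycle_s):
    """Completed observe-orient-decide-act cycles inside the window."""
    return int(window_s // cycle_s)

WINDOW_S = 10.0        # a hypothetical ten-second engagement
print("human:", decisions_within(WINDOW_S, HUMAN_CYCLE_S))   # 5
print("ANI:  ", decisions_within(WINDOW_S, ANI_CYCLE_S))     # 200
```

On these assumed numbers the machine acts forty times for each human decision, which is the mechanism behind the offence-favouring shift described above.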

In 2016, DeepMind’s AlphaGo AI defeated world champion Lee Sedol at Go, the popular board game, by employing a Monte Carlo tree search algorithm to determine its moves. It had previously gained knowledge of the game using an artificial neural network during comprehensive practice against both human and AI opponents.[14] During the gameplay, experienced observers noted that AlphaGo occasionally made moves radically different from, and uncharacteristic of, human play. Payne says AI can draw parallels and inferences from data that are conceivably invisible to humans.[15] Thus, ANI’s unanticipated insights, imperceptible to humans, might drastically affect the shaping of strategy.
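
As a rough illustration of the search technique named above, the sketch below implements a bare-bones Monte Carlo tree search for an invented toy take-away game (21 stones; remove one to three per turn; whoever takes the last stone wins). It is not AlphaGo's implementation: AlphaGo guides the search with learned neural-network evaluations, whereas this sketch substitutes plain random rollouts.

```python
# Minimal Monte Carlo tree search (UCT variant) on a made-up take-away game.
import math
import random

TAKE = (1, 2, 3)                       # legal moves: remove 1, 2, or 3 stones

class Node:
    def __init__(self, stones):
        self.stones = stones
        self.children = {}             # move -> child Node
        self.visits = 0
        self.value = 0.0               # total reward for the player to move here

def legal(stones):
    return [t for t in TAKE if t <= stones]

def rollout(stones):
    """Random playout; +1 if the player to move at `stones` ends up winning."""
    sign = 1
    while stones > 0:
        stones -= random.choice(legal(stones))
        if stones == 0:
            return sign                # the mover just took the last stone
        sign = -sign

def best_move(root_stones, iterations=3000):
    root = Node(root_stones)
    for _ in range(iterations):
        node, path = root, [root]
        # 1. Selection: descend through fully expanded nodes via UCT.
        while node.stones > 0 and len(node.children) == len(legal(node.stones)):
            node = max(node.children.values(),
                       key=lambda c: -c.value / c.visits
                       + 1.4 * math.sqrt(math.log(node.visits) / c.visits))
            path.append(node)
        # 2. Expansion: add one untried move.
        if node.stones > 0:
            move = random.choice([t for t in legal(node.stones)
                                  if t not in node.children])
            node.children[move] = Node(node.stones - move)
            node = node.children[move]
            path.append(node)
        # 3. Simulation: a random rollout stands in for AlphaGo's evaluator.
        reward = rollout(node.stones) if node.stones > 0 else -1
        # 4. Backpropagation: flip the sign each ply, since turns alternate.
        for n in reversed(path):
            n.visits += 1
            n.value += reward
            reward = -reward
    return max(root.children, key=lambda m: root.children[m].visits)

print(best_move(21))   # perfect play leaves a multiple of four: expect 1
```

Moves are chosen from accumulated playout statistics rather than human pattern libraries, which is one reason such a search can surface lines of play that look alien to experienced human observers.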

There are always physical and psychological aspects to strategizing, in which human decision-makers are affected by factors like cognitive load, time constraints, stress, and fatigue.[16] During the Vietnam War, President Johnson’s colleagues thought he was depressed, while his successor, Nixon, was prone to unrestrained anger.[17] Here, ANI can provide decisive inputs to decision-makers while avoiding the cognitive heuristics and skewed risk judgement that humans are susceptible to under duress.

Defence planners may, however, start to view AI-generated observations as comparable, or even superior, to those made by humans. This automation bias, an over-reliance on automation in military decision-making, will probably foster circumstances leading to strategic instability in the absence of human intuition, judgement, and accountability.[18] Army investigators found automation bias to be a contributing factor in the 2003 Patriot fratricides, in which two friendly aircraft were shot down by Army Patriot air and missile defence operators during the initial stages of the Iraq War. In both cases, operators relied on the (inaccurate) signals their automated radar systems were sending them, even though the operators were “in the loop” and had the final say over whether or not to fire.[19]

However, unlike a game with well-defined problem sets, war, as Clausewitz claimed, is “the realm of uncertainty,” where situational vagueness creates a fog.[20] Yarger applied chaos theory to strategy and called strategic environments “chaotic” systems, sensitive to variations in initial conditions and subject to the seemingly random and unpredictable behaviour of the adversary. He underscored the flexible, contextual experience, anticipation, and insight of the human element as the success factor behind strategy.[21] For military strategy, ANI can provide tactical advantages, but human genius is necessary to guide AI through the perilous and imperfect knowledge available in strategic environments. In the case of intelligence, surveillance, and reconnaissance (ISR), AI can lessen the burden of data processing through constant monitoring of numerous data feeds and identification of patterns of interest,[22] as with the AI-based Project Maven that the Pentagon used against ISIS.[23] But the judgement aspect of strategy, even in Project Maven, inevitably requires human intelligence.
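
The division of labour just described, machine triage of the data-processing burden with judgement retained by a human, might look something like the sketch below. The feed, the detector, the threshold, and the analyst stub are all invented for illustration; this is not Project Maven's actual pipeline.

```python
# Illustrative human-in-the-loop triage: the machine filters, a human decides.
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int
    label: str
    confidence: float                  # detector's self-reported score, 0..1

def triage(feed, review_threshold=0.5):
    """Machine side: collapse a high-volume feed into a short review queue."""
    return [d for d in feed if d.confidence >= review_threshold]

def human_review(candidates):
    """Human side: the analyst, not the model, makes the final call.
    A crude stub stands in for the analyst's contextual judgement here."""
    decisions = []
    for det in candidates:
        # A real analyst weighs context the model cannot see: rules of
        # engagement, intent, collateral risk. The cut-off below is only a
        # placeholder for that judgement.
        confirmed = det.confidence > 0.9
        decisions.append((det.frame_id, det.label, confirmed))
    return decisions

feed = [
    Detection(1, "vehicle", 0.97),
    Detection(2, "vehicle", 0.62),
    Detection(3, "clutter", 0.12),     # filtered out before a human sees it
]
print(human_review(triage(feed)))
```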

Moreover, Clausewitz deemed strategy to incorporate “human passions, values, and beliefs,” which are almost certainly unquantifiable for an ANI.[24] Yarger also contended that strategy is “essentially a human enterprise” and suggested that human emotion, ideology, and culture influence dynamic policy-led goals and the strategy schemed for achieving them.[25] Bostrom says an AI will have “convergent instrumental” values, whereby it targets sub-goals that make the final goal of the strategy easier to achieve.[26] Human responses tied to issues of ‘reputation’ and ‘credibility’ may make a decision-maker veer from coercion to diplomacy.[27] An AI’s instrumental values are more likely to be detached and coldly rational than in sync with human values and ethics, which underlines why a human-in-the-loop approach is imperative.

Even so, ANI's tactical importance will inexorably extend into the strategic sphere, as ANI will dramatically tip the balance of power in its possessor's favour in many ways. Payne believes this would produce a neorealist security dilemma and, therefore, a new arms race among great powers, with smaller nations bandwagoning with greater ones.[28]

As nations race to achieve AI hegemony, governments rely on private companies like Google and IBM, as they do for many other dual-use technologies, for the underlying R&D and expertise needed to put AI to military use. In practice, this means that many nations will draw on the same international supply chains to support their military AI ambitions, potentially creating competitive conflicts of interest[29] and affecting the path to the military means necessary for fulfilling political ends. Meanwhile, an authoritarian state like China benefits because it is not hindered by a separation between the public and private sectors and can ensure that its private sector’s AI progress is committed to advancing military might and the political goals coupled to it.

AGI and Strategy

As Artificial General Intelligence (AGI) does not yet exist, any analysis of AGI and strategy is unavoidably speculative. AGI is anticipated to be an autonomous agent with unsupervised learning capabilities that it can apply across diverse realms, and it could become the first non-biological entity competent in strategic thought. Because AGI’s underlying cognition would be substantially different from a human’s,[30] if humans delegate strategic functions to it, AGI could drastically change the basis of strategy by producing hitherto unpredictable actions and schemes.

General John Allen says AGI can bring on a hyper-war, a war at the speed of light, in which the observe-orient-decide-act (OODA) loop would almost entirely lack human decision-making.[31] Payne argues that even if AGI were to have functional characteristics close to human cognition, it would still lack a human-like assessment of what counts as a good-enough outcome and the capability to reconcile its goal with alternative possibilities.[32] In a Cuban Missile Crisis-like situation, for example, an AGI might approach the event with its eyes on a full-fledged military attack, instead of Kennedy-style caution consisting of blockade and a focus on diplomacy.[33] Similarly, a fast-acting AGI capable of adaptive thinking and informed by a diverse data set might be confident enough to undercut defensive measures completely and adopt unstable nuclear postures, such as renouncing no-first-use or even launching a pre-emptive first strike, affecting strategic stability.[34] This is because even an advanced AGI cannot comprehend the meaning behind human political goals.[35] That raises the question of whether a future AGI would be able to set its own goals and, if it did, whether they would be compatible with human goals. The issue may be exacerbated when an AGI recursively self-improves or when a superintelligence endowed with superhuman cognition is created.

Bostrom contends that an AGI that can think for itself would evolve into a super-intelligent AI that cannot be stopped and would adopt goals like ceaseless resource acquisition and self-preservation, leading to a malignant failure in which humans face an existential crisis.[36] Fearing this, Robert Work, former U.S. Deputy Secretary of Defense, asserts that he cannot even imagine employing AI in weapons or strategy without a “human in the loop.”[37] In Arkin's opinion, AGI must be programmed, with a control switch, so that its impact on strategy serves the sole aim of maximizing the realization of human objectives.[38]

Conclusion

Gray asserted that the fundamental theory and function of strategy is an abiding and unchangeable human activity, and that the only dynamic, variable element is the technological context in which strategy is set.[39] Given the current unlikelihood of programming human faculties like emotion, intuition, and self-aware consciousness into AI, narrow AI will not fundamentally alter the nature of strategy, even as its capabilities inform human intelligence in strategic decision-making. For the foreseeable future, the foundation of strategy will remain an essentially human endeavour, in which humans comprehend the broader context and adapt to novel situations, while ANI handles specific, tailored tasks that exploit its advantages in speed. However, the intelligence explosion that may produce self-thinking AGI might augur a human-machine civilization in which strategy-making is shared, and that age might bear witness to AGI drastically changing the way strategy is schemed and implemented. For now, though, it remains a theoretical concept.

Nitin Menon is an engineer by education and an educator by passion with a keen interest in geopolitics and diplomacy. 

Bibliography

Books

Bellman, Richard. An Introduction to Artificial Intelligence: Can Computers Think? San Francisco: Boyd & Fraser Pub. Co, 1978.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.

Freedman, Lawrence. Strategy: A History. New York: Oxford University Press, 2013.

Gray, Colin S. The Future of Strategy. Cambridge: Polity Press, 2015.

Gray, Colin S. The Strategy Bridge: Theory for Practice. Oxford: Oxford University Press, 2010.

Payne, Kenneth. Strategy, Evolution, and War: From Apes to Artificial Intelligence. Washington, DC: Georgetown University Press, 2018.

Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. New York: W.W. Norton & Company, 2019.

Von Clausewitz, Carl. On War. Princeton, New Jersey: Princeton University Press, 1989.

Journal Articles

Ayoub, Kareem, and Kenneth Payne. ‘Strategy in the Age of Artificial Intelligence’. Journal of Strategic Studies 39, no. 5–6 (18 September 2016): 793–819. doi:10.1080/01402390.2015.1088838.

Goldfarb, Avi, and Jon R. Lindsay. ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’. International Security 46, no. 3 (25 February 2022): 7–50. doi:10.1162/isec_a_00425.

Horowitz, Michael C. ‘When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability’. Journal of Strategic Studies 42, no. 6 (19 September 2019): 764–88. doi:10.1080/01402390.2019.1621174.

Johnson, James S. “Artificial Intelligence: A Threat to Strategic Stability.” Strategic Studies Quarterly 14, no. 1 (2020): 16–39. https://www.jstor.org/stable/26891882.

Sternberg, Robert J. ‘The Theory of Successful Intelligence’. Review of General Psychology 3, no. 4 (1 December 1999): 292–316. doi:10.1037/1089-2680.3.4.292.

Winter-Levy, Sam, and Jacob Trefethen. “Safety First: Entering the Age of Artificial Intelligence.” World Policy Journal 33, no. 1 (2016): 105–11. https://www.jstor.org/stable/26781386.

Yarger, Harry R. “Strategic Theory for the 21st Century: The Little Book on Big Strategy.” Strategic Studies Institute, US Army War College, 2006. http://www.jstor.org/stable/resrep12087.

Websites

‘A Generalist Agent’. DeepMind. Accessed 16 February 2023. https://www.deepmind.com/publications/a-generalist-agent.

Allison, Graham. ‘Is China Beating America to AI Supremacy?’. The National Interest. The Center for the National Interest, 22 December 2019. https://nationalinterest.org/feature/china-beating-america-ai-supremacy-106861.

Allen, John, and Amir Husain. ‘On Hyperwar’. U.S. Naval Institute. Accessed 21 February 2023. https://www.usni.org/magazines/proceedings/2017/july/hyperwar.

Koch, Christof. ‘How the Computer Beat the Go Master’. Scientific American. Accessed 21 February 2023. https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/.

Scharre, Paul. ‘A Million Mistakes a Second’. Foreign Policy. Accessed 25 February 2023. https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war/.

‘The Pentagon’s New Algorithmic Warfare Cell Gets Its First Mission: Hunt ISIS’. Defense One. Accessed 21 February 2023. https://www.defenseone.com/technology/2017/05/pentagons-new-algorithmic-warfare-cell-gets-its-first-mission-hunt-isis/137833/.

‘The Rise of A.I. Fighter Pilots’. The New Yorker. Accessed 18 February 2023. https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai-fighter-pilots.


[1] Carl Von Clausewitz, On War (Princeton, New Jersey: Princeton University Press, 1989), 87.

[2] Robert J. Sternberg, ‘The Theory of Successful Intelligence’, Review of General Psychology 3, no. 4 (December 1999): 292-316.

[3] Richard Bellman, An Introduction to Artificial Intelligence: Can Computers Think? (San Francisco: Boyd & Fraser Pub. Co,1978),12.

[4] Sam Winter-Levy and Jacob Trefethen, ‘Safety First: Entering the Age of Artificial Intelligence’, World Policy Journal, no.1(2016):105-111.

[5] Winter-Levy and Trefethen, ‘Safety First’, 105-111.

[6] "A Generalist Agent", Deepmind, accessed 16 February 2023, https://www.deepmind.com/publications/a-generalist-agent.

[7] Richard Betts, ‘Is Strategy an Illusion?’, International Security 25, no. 2 (2000): 5.

[8] Colin S. Gray, The Future of Strategy (Cambridge: Polity Press,2015),10.

[9] Kenneth Payne, Strategy, Evolution, and War: From Apes to Artificial Intelligence (Washington: Georgetown University Press, 2018),181.

[10] James Johnson, ‘Artificial Intelligence: A Threat to Strategic Stability’, Strategic Studies Quarterly, no.1 (2020), 20.

[11] ‘The Rise of A.I. Fighter Pilots', The New Yorker, accessed 18 February 2023, https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai-fighter-pilots.

[12] Michael C. Horowitz, ‘When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability’, Journal of Strategic Studies 42, no.6 (19 September 2019): 764–88.

[13] Lawrence Freedman, Strategy: A History (New York: Oxford University Press, 2013), xii.

[14] Christof Koch, ‘How the Computer Beat the Go Master’, Scientific American, accessed 21 February 2023, https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/.

[15] Payne, Strategy, Evolution, and War, 175-176.

[16] Kareem Ayoub and Kenneth Payne, ‘Strategy in the Age of Artificial Intelligence’, Journal of Strategic Studies 39, no.5–6 (18 September 2016): 798.

[17] Ibid.

[18] Horowitz, ‘When Speed Kills’, 773-774.

[19] Ibid.

[20] Von Clausewitz, On War, 101.

[21]  Harry R. Yarger, Strategic Theory for the 21st Century (Strategic Studies Institute, US Army War College, 2006), 34-38.

[22] Avi Goldfarb and Jon R. Lindsay, ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’, International Security 46, no.3 (25 February 2022): 36-37.

[23] ‘The Pentagon’s New Algorithmic Warfare Cell Gets Its First Mission: Hunt ISIS’, Defense One, accessed 21 February 2023, https://www.defenseone.com/technology/2017/05/pentagons-new-algorithmic-warfare-cell-gets-its-first-mission-hunt-isis/137833/.

[24] Von Clausewitz, On War, 134-135.

[25] Yarger, Strategic Theory for the 21st Century,40.

[26] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).

[27] Ayoub and Payne, ‘Strategy in the Age of Artificial Intelligence’, 797.

[28] Payne, Strategy, Evolution, and War,190.

[29] Graham Allison, ‘Is China Beating America to AI Supremacy?’, The National Interest, accessed 22 February 2023, https://nationalinterest.org/feature/china-beating-america-ai-supremacy-106861.

[30] Payne, Strategy, Evolution, and War, 208-210.

[31] John Allen and Amir Husain, ‘On Hyperwar’, U.S. Naval Institute, accessed 21 February 2023, https://www.usni.org/magazines/proceedings/2017/july/hyperwar.

[32] Payne, Strategy, Evolution, and War, 200.

[33] Paul Scharre, ‘A Million Mistakes a Second’, Foreign Policy, Accessed 25 February 2023, https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war.

[34] Johnson, ‘Artificial Intelligence: A Threat to Strategic Stability’, 29-31.

[35] Payne, Strategy, Evolution, and War, 22-23.

[36] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press,2014),123-124.

[37] Paul Scharre, Army of None: Autonomous Weapons and the Future of War, (New York: W.W. Norton & Company, 2019), 281-282.

[38] Scharre, Army of None, 227-228.

[39] Colin S. Gray, The Strategy Bridge: Theory for Practice (Oxford: Oxford University Press,2010), 20.
