Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence

Political Science

by 腦fficial Pragmatist 2023. 3. 15. 13:52


BENJAMIN M. JENSEN (Marine Corps University), CHRISTOPHER WHYTE (Virginia Commonwealth University), SCOTT CUOMO (United States Marine Corps)

 

How might rapid advances in artificial intelligence (AI) technologies affect the construction and application of military power? Despite the emerging importance of AI systems in defense modernization initiatives, there has been little empirical or theoretical study from the perspective of the international relations (IR) and security studies fields. This article addresses this shortcoming by describing AI developments and assessing the manner in which AI is likely to affect military organizations. We focus specifically on military power, as new methods and modes thereof will alter the constitution of security relationships around the world and affect the ability of states to bargain, signal, and influence in the twenty-first century. We argue that, though rapid adoption of AI technologies stands to transform states’ ways of war on a number of fronts, an AI revolution brings with it new forms of risk that must be reconciled with the widespread integration of algorithmic systems across military functions. Where new technology promises a transformation of the character of military power in some veins, it also complicates the cognitive aspects of decision-making and bureaucratic interactions in security institutions. The speed with which complex integrated AI systems enable entirely new modes of war also stands to detach human agency in a potentially destabilizing fashion from the conduct of warfare on several fronts. Preventing the negative externalities of these “ghosts in the machine” will involve significant efforts to educate decision makers, promote accountability, and restrain irresponsible employment of AI.

 

Keywords: artificial intelligence, military innovation, military power

 

In a November 14, 2017, presentation before the United Nations Convention on Conventional Weapons, University of California-Berkeley Professor Stuart Russell released a seven-minute fictional video on the threat of autonomous weapon systems, entitled “Slaughter Bots” (Cussins 2017). In it, cheap and privately available lethal drones catalyze a global breakdown of societal order as governments and nonstate groups alike begin to anonymously kill off opponents, from foreign militaries to rebellious college students, en masse. The video went viral, echoing public pronouncements of the impending doom of weaponized artificial intelligence (AI) by such technology luminaries as Stephen Hawking, Elon Musk, Max Tegmark, and Steve Wozniak (Tegmark 2017, 324–26). For Hawking, “success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks” (Hawking et al. 2014).

 

These thinkers see inevitable hubris in the rush by the defense departments of great powers like the United States to embrace a “Third Offset” built around AI and autonomous weapons designed to produce a capability gap sufficient to ensure military dominance and deter conventional conflict. And yet, the position held by practitioners across the globe is that such developments are inevitable. The recently released 2018 US National Defense Strategy (NDS) states that “rapid technological advancements” in areas like artificial intelligence are “changing the character of war” (Mattis 2018, 3). AI, in particular, seems poised to transform fields from defense and law enforcement to healthcare (Buchanan and Miller 2018). In our daily lives and economic transactions, AI will be as vital as electricity (Ng 2016; Horowitz 2018). Nations will wage “algorithmic warfare,” automating intelligence, surveillance, and reconnaissance tasks to win the fight for information before the first shot is fired (Allen and Chan 2017). The nation that best collects and makes sense of data through AI applications will increase its military power and realize long-term competitive advantages. Russian President Vladimir Putin has gone so far as to say that “the one who becomes the leader in this sphere will be the ruler of the world” (Karpukhin 2017). Where transnational advocacy groups see ethical concerns, leading states see an opportunity to increase their military power.

 

Algorithms that learn, sense, and help machines move through the battlefield have the potential to increase military power. Yet, states will wage algorithmic warfare from inside the labyrinth of the modern defense bureaucracy, and the resulting plans and battles will still involve human judgment, even if indirectly as assumptions layered in lines of code. The institutional and cognitive aspects of national security decision-making will likely distort the expected benefits arising from faster operational tempo, more robust intelligence, and reduced risks, as fewer soldiers operate increasingly autonomous war machines. In the sections below, we explore some of the significant political, bureaucratic, and social limitations that will shape how different nations and defense organizations adopt AI, as well as the extent to which AI might alter military power.

 

Despite the emerging importance of AI in defense modernization initiatives and the fear it generates in transnational advocacy networks, there has been little theoretical or empirical study on AI from the perspective of international security or international relations (IR). This article seeks to address this gap by describing what “artificial intelligence” technologies actually look like and assessing how AI is likely to affect military organizations and power. Though there are innumerable potential ramifications of an AI revolution to be explored beyond the scope of military affairs, we argue that a critical first step is the triangulation of a framework with which security practitioners can think about AI-augmented military operations. If AI is as transformative as many experts claim, it could spark a new revolution in military affairs (RMA) (Adamsky 2010; Jensen 2018; Knox and Murray 2001). Increased military power will alter the constitution of state security relationships and shift the basis of national power to both bargain and influence in an increasingly multipolar world. Attempting to better understand how AI could alter military power is, thus, a key task for IR scholars.

 

The article proceeds as follows. First, we contextualize our effort in the small but budding body of work on AI among political scientists, noting in particular the significance of an emerging debate over the degree to which we might expect to see AI technologies proliferate in the future. Then, we define what AI is from a technical standpoint and discuss how it could alter military power at different levels of analysis. Rather than using international relations “levels” or “images,” we apply a practitioner framework and explore how AI could alter military power at the tactical, operational, and strategic levels of competition and warfighting (Luttwak 1980–1981; Waltz 1959; Singer 1961; Mattelar 2009; US Department of Defense [DoD] 2017). “Tactical” refers to individual battles and engagements. It “is the employment, ordered arrangement, and directed actions of forces in relation to each other” (DoD 2017, I-4). The operational, or campaign, level of analysis “links the tactical employment of forces to national strategic objectives” (DoD 2017, I-13). Strategy in this context refers to a “set of ideas on the ways to employ instruments of power” to achieve national interests (DoD 2017, I-13). As militaries field AI-enabled systems at each level, they create the potential to alter military power and build entirely new theories of victory for applying such power in the pursuit of national security objectives (Posen 1986; Snyder 1989; Rosen 1994; Jensen 2016).

 

Second, the article shifts from a technical discussion of AI, and in particular its coercive potential, to explore the concept of military power and its limitations. Despite the promise of AI to solve a myriad of tactical and operational challenges (not to mention the peril of lethal autonomous systems like the “slaughter bots”), it is critical that security practitioners consider the intervening effect of human usage and institutions, not only in the United States but also within great power competitors such as China and Russia. New capabilities interact with existing defense organizations and human decision-making in unanticipated ways, changing the expected timeline for fielding AI-enabled systems and shaping their eventual effect on military power.

 

Perhaps most significantly, we argue that an AI revolution brings with it new forms of risk that must be reconciled with the widespread integration of algorithmic systems across military functions. Though much of what constitutes the basket of technologies we discuss herein under the “artificial intelligence” banner is not new, we sit at an inflection point wherein AI systems, powered by rapid scientific advances and poised to receive massive, comprehensive funding from governments around the world, will be made to interact with both one another and human institutions in unprecedented fashion. Where new technology promises a transformation of the character of military power in some veins, it also complicates the cognitive aspects of decision-making and bureaucratic interactions in security institutions. The speed with which complex integrated AI systems enable entirely new modes of war also stands to detach human agency in a potentially destabilizing fashion from the conduct of warfare on several fronts. Preventing the negative externalities of these “ghosts in the machine” will involve significant efforts to educate decision makers, promote accountability, and restrain irresponsible employment of AI (Ryle 1949).

 

We then conclude by returning to the core question of proliferation and offer recommendations for future scholarship. Specifically, we suggest several discrete lines of inquiry along which social scientists can quickly work to build a foundation of knowledge that can be used to help practitioners avoid major obstacles or deviant outcomes in the adoption of AI. These include the theorization of future AI-enabled conflict given prevailing legal parameters for automated lethality and attribution, the conceptualization of a shifting deterrent landscape based on such considerations, and the use of experimental approaches to better understand how new dynamics of human-nonhuman intelligence interaction might affect the psychology of future warfighting.

 

What Scholars Say about Artificial Intelligence

 

This article is not the first to take up the challenge of problematizing and analyzing the coming impact of AI on international relations. Though there are only a few examples of such work at the time of writing this article, several authors have already taken steps to assess potential opportunities and risks bound up in advancing intelligent technologies on several fronts (Ayoub and Payne 2016; Roff 2017; Horowitz 2018; Payne 2018; Brundage et al. 2018; Horowitz et al. 2018). Work in this vein has tended to focus on critical questions along four lines. First, will AI fundamentally change either the character or nature of warfare? Second, will intelligent agents incorporated into military or societal processes affect the stability of international relations during crisis periods? Third, might AI, given effective international cooperation, reinforce peacekeeping mechanisms currently present in international affairs? And, fourth, can AI be harnessed, and can benefits be accrued safely, without serious risk of negative externalities that emerge from failures in development and adoption?

 

The first of these questions is arguably the most noteworthy insofar as there already exists disagreement on the degree to which AI technologies will present as militarily revolutionary. Against the backdrop of statements made by world leaders and prominent bureaucrats, some scholars point to the manner in which AI advances stand ready to potentially shift the cognitive bases of international conflict and warfighting as conjectural evidence that we are at the outset of a true revolution in military affairs (Krepinevich 1994; Cohen 1996; Adamsky 2010; O’Hanlon 2011; Jensen 2018). Such a shift, they argue, is unprecedented beyond even previous technological revolutions that altered the terrain of international conflict, like nuclear weapons, because it portends the advent of nonhuman intelligence at work in war. On the other side of the debate are those that note that AI technologies are likely to be applied toward replicating and enhancing existing capabilities employed by states in war and peace. Though this side of the debate remains in need of better unpacking, the arguments involved are familiar. In other settings, similar divisions have dominated research and commentary among scholars focused on issues of drone warfare (Stulberg 2007; Carpenter and Shaikhouni 2011; Moyar 2014; Boyle 2015; Horowitz, Kreps, and Fuhrmann 2016) and cyber conflict (Rid 2012; Gartzke 2013; Kello 2013; Valeriano and Maness 2015). Are these different iterations of new information technologies RMAs? Or are they simply old wine in new, if admittedly more sophisticated, bottles?

 

Though we do not explicitly aim to support one side or another of the RMA debate when it comes to AI, our effort here to address the impact thereof on military power clearly speaks to the question of proliferation. Perhaps the most relevant consideration for either side of the debate is the degree to which militaries and other security institutions will be able and incentivized to race toward AI-augmented capabilities. We argue that the key to finding analytically useful terrain upon which to assess the impact of AI on international relations involves bringing the RMA debate back down to earth by focusing on the human elements of artificial intelligence adoption. In doing so, scholars stand to both better define their positions and, more importantly, find better ground upon which to contextualize issues of AI safety.

 

What is Artificial Intelligence?

 

Though emerging work on AI in the social sciences variously outlines the shape of the thing, few pieces effectively categorize the basket of technological advances under discussion. Understanding how AI will shape twenty-first century military power requires a deeper understanding of its etymology and origins than is often imparted in studies focused at the level of the international system. As such, we offer a categorization of AI systems here that allows for a more nuanced consideration of implications for military power in more than just strategic terms. In describing these categories, we particularly highlight those technologies that most clearly promise to mimic traditionally human features of and roles in conflict.

 

The term “artificial intelligence” emerged at the intersection of computer science and cybernetics, the study of control and communication, in Cold War America (Kline 2011). In 1955, a group of researchers, including Claude Shannon and Marvin Minsky, approached the Rockefeller Foundation to fund a summer research effort at Dartmouth College exploring how “every aspect of learning or any other feature of intelligence [could] in principle be so precisely described that a machine [could] be made to simulate it” (Wiener 1948; Buchanan 2005; Moor 2006). In the proposal, the researchers defined the core of the artificial intelligence problem in terms of hardware and software challenges, as well as larger philosophical questions. The researchers asked whether it was possible to build “automatic computers” fast enough and accessing sufficient memory to “simulate the higher functions of the human brain” (Kline 2011). The proposal called for exploring the feasibility of building “neuron nets” able to form concepts and to understand and use human language, as well as of enabling machines to engage in self-improvement, abstract reasoning, or creative thinking (Kline 2011).

 

These questions emerged from a larger exploration of machine intelligence initiated by thinkers such as Norbert Wiener, Alan Turing, and John von Neumann. In his 1948 book Cybernetics, Wiener proposed a computer that could play games like chess (Wiener 1948). In 1950, Alan Turing published a paper in the journal Mind that asked the question, “can machines think?” and proposed a test, the Turing Test, to answer it (Turing 1950). In his 1958 book, The Computer and the Brain, von Neumann described artificial intelligence as “an approach toward an understanding of the nervous system from the mathematician’s point of view” (Von Neumann 2012, 1). This search for biological antecedents to future machine intelligence formed the basis of many of the major breakthroughs in AI, including early research on image recognition and neural networks, two of the major areas of contemporary research.

 

The question of what it means to be an intelligent system sits at the core of AI research. For Nils J. Nilsson, “artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” (Nilsson 2010, i). For Herbert Simon, AI is a unique domain of scientific inquiry organized around three goals: (1) constructing “computer programs . . . capable of exhibiting intelligence,” (2) constructing subsets of these intelligence programs that replicate human tasks, and (3) constructing entirely new “expert systems” that could “supplement or complement human intelligence in performing some of the world’s work” (Simon 1995). In line with Nilsson and Simon’s definitions, most AI research focuses on creating intelligent systems to solve narrow problems without the added ability to adapt skillsets to tackle diverse tasks, in contrast to headlines about killer robots and Ray Kurzweil’s singularity claim that artificial superintelligence will replace humans in the near future (Kurzweil 1990, 1999, 2005; Vinge 1993). To date, many of the benchmark tests for AI conform to narrow categories as researchers develop software applications that can learn, recognize images and speech, and move autonomously.

 

Machines That Learn

 

Advances in technological and mathematical approaches to network modeling have the potential to help soldiers recognize and interpret, if not anticipate, battlefield changes. This is not an observation uniquely applicable to this discussion of AI; artificial network modeling methods, such as basic linear or logistic regression, have aided military planners and operators for decades. However, recent advances in big data and hardware optimized for neural networks and machine learning allow programmers to build whole new classes of software that learn from the environment around them (Haykin 2008; Mayer-Schonberger and Cukier 2013). Deep learning and referential and reinforcement learning programs involve training neural networks to “learn for themselves. . . . [through] trial-and-error, solely from rewards or punishments” (LeCun, Bengio, and Hinton 2015). Programmers develop software agents that “construct and learn their own knowledge directly from raw inputs, such as vision, without any hand-engineered features or domain heuristics” (LeCun et al. 2015). For example, Google’s DeepMind used deep and reinforcement learning to train artificial neural networks to play classic Atari computer games and to train itself to play the classic Chinese strategy game Go (Mnih et al. 2015; Rusu et al. 2016).
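As a concrete illustration of learning “solely from rewards or punishments,” the sketch below implements tabular Q-learning, one of the simplest reinforcement learning methods, on a toy corridor environment. It is only a minimal, illustrative analogue of the learning loop that DeepMind scaled up with deep neural networks for Atari and Go; the environment, reward values, and parameters are invented for this sketch.

```python
import random

# Toy corridor: states 0..4, start at state 0, reward only for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise exploit the current estimate.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Update the estimate from the reward signal alone (trial and error).
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right from every state toward the goal.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```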

 

Deep learning is a core component of the varied methods and technologies that undergird narrow AI. Deep learning involves the development of interpretive systems that can be trained to recognize complex patterns in data. As mentioned above, we focus on deep learning here over alternatives like supervised learning because of the clear link between such autonomous interpretation and the mimicry of human intelligence. “Reasoning” is often substituted for “learning” in deep learning because the underlying methods aim at generalizable abilities to infer, rather than at the ability to perform specific tasks. In other words, deep learning algorithms are designed to assess the pattern in data regardless of the nature of that data. Such algorithms learn and infer based on observed patterns in past behavior. If a new situation does not match previous situations, AI has trouble making sense of the data. This limitation presents a challenge at strategic levels where key security questions often pivot on elite intention under heightened tension and uncertainty.
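The out-of-distribution limitation described above can be shown with a deliberately simple stand-in for a learned classifier, a nearest-centroid model fit to synthetic “past behavior”; the same logic applies, at far greater scale, to deep networks. All data, labels, and signal values here are invented for illustration.

```python
import random
import statistics

random.seed(0)

# "Past behavior": two well-separated classes of signal observed in training.
train = [("routine", random.gauss(0.0, 1.0)) for _ in range(200)] + \
        [("mobilization", random.gauss(10.0, 1.0)) for _ in range(200)]

# Fit a nearest-centroid classifier: remember each class's average signal.
centroids = {
    label: statistics.mean(x for lbl, x in train if lbl == label)
    for label in ("routine", "mobilization")
}

def classify(x):
    # Assign the new observation to whichever past pattern it most resembles.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# A case resembling the training data is handled well...
print(classify(9.7))   # -> "mobilization"

# ...but a genuinely novel signal, unlike anything seen before, is still
# forced into one of the old categories, with no warning that it is new.
print(classify(55.0))  # -> "mobilization"
```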

 

During the 1962 Cuban Missile Crisis, for example, the central question for President Kennedy was not whether or not there were nuclear-capable missiles on the island. Rather, the question was how far the Soviet Union was willing to go to protect the launch sites under the threat of nuclear exchange (Allison 1969; Allison 1971; Allyn, Blight, and Welch 1989; Kroenig 2018, 84–94). Resolve, not estimates of the number of missiles, doctrinal processes for firing them, or probable flight times and targets, is what decision makers needed to understand.

 

At the tactical level, a common narrative among military professionals and security experts is that deep learning has the potential to create combat-advising software agents that anticipate both the natural and human environment, offering predictions about enemy actions, such as likely future attacks, independent of military personnel (Jensen, Cuomo, and Whyte 2018). Beyond the battlefield, experts agree that deep reasoning is likely to be a boon for complex, dispersed security establishments. Just as Amazon anticipates consumer purchases and optimizes its marketing and logistics around analyzing your preferences, software agents could anticipate the supply needs of entire joint task forces at the operational level (Linden, Smith, and York 2003). For example, a US Arleigh Burke-class Aegis destroyer carries fifty-six Tomahawk cruise missiles, out of a global inventory of between three and four thousand, each costing approximately $1.4 million (DoD 2017). The ship cannot reload at sea and must steam to a port, often days away, where it can take one to two days to reload the missiles. Applying practices like anticipatory shipping, which delivers products to customers before they place orders, could ensure shorter reload times and more responsive forces in a crisis, such as the recent use of Tomahawks against a Syrian airfield in 2017 (Lee 2017).
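A minimal sketch, under invented assumptions, of what this anticipatory-resupply logic might look like: forecast how long the magazine lasts at a predicted expenditure rate and begin the reload cycle early enough to cover transit and pierside reload time. Only the magazine size and approximate unit cost come from the text; the transit time, expenditure forecast, and thresholds are illustrative.

```python
# Rough figures from the text: a 56-missile magazine, roughly $1.4 million per
# missile, multi-day transit to a reload port, and one to two days pierside.
TRANSIT_DAYS = 4          # assumed steaming time to the nearest reload port
RELOAD_DAYS = 2
UNIT_COST = 1.4e6         # approximate cost per Tomahawk

def days_until_empty(stock, forecast_daily_expenditure):
    """Forecast how long the current magazine lasts at the predicted rate."""
    if forecast_daily_expenditure <= 0:
        return float("inf")
    return stock / forecast_daily_expenditure

def should_order_reload(stock, forecast_daily_expenditure):
    """Anticipatory logic: start the reload cycle while missiles remain, so the
    ship is rearmed at roughly the moment the magazine would have run dry."""
    lead_time = TRANSIT_DAYS + RELOAD_DAYS
    return days_until_empty(stock, forecast_daily_expenditure) <= lead_time

# Example: 30 missiles remain of the 56-round magazine, and a crisis model
# forecasts six launches per day.
stock, rate = 30, 6
print(days_until_empty(stock, rate))      # 5.0 days of missiles left
print(should_order_reload(stock, rate))   # True: begin repositioning now
print(f"remaining magazine value: ${stock * UNIT_COST:,.0f}")  # $42,000,000
```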

 

Beyond operational logistics and supply chain considerations, deep learning has the potential to revolutionize military readiness and force posture. Similar to the Tomahawk missile example, software agents could anticipate changing readiness levels across the ground, air, and naval services based on operational tempo, weather, and even the unintended effects of crew rotations, promotions, and emerging morale and discipline issues caused by sexual harassment allegations.

 

Furthermore, via scaling technology already demonstrated in smart buildings and intelligent manufacturing, a software agent could make recommendations to alter personnel rotations, training prioritization, and other resources to increase readiness levels (Manic et al. 2016; Wang and Jiang 2016). That is, using historical data and simulations, the software would predict likely changes to readiness that would help the secretary of defense and the chairman of the Joint Chiefs of Staff weigh military options offered to the president and help determine the balance between spending on readiness and modernization. The software would guide staff officers seeking to fix anticipated future readiness gaps before they emerged, thus ensuring a higher number of combat formations available to the national command authority.
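A minimal sketch of the readiness-forecasting idea: fit a simple trend to each unit's historical readiness scores and flag units projected to fall below a floor within the planning horizon, so staff can act before the gap emerges. The units, scores, threshold, and linear model are all invented stand-ins for the richer data and methods a fielded system would use.

```python
# Invented historical readiness scores (0-100) for three notional units.
history = {
    "1st Battalion": [82, 80, 79, 77, 75, 73],
    "2nd Battalion": [68, 70, 71, 73, 74, 76],
    "3rd Battalion": [90, 89, 89, 88, 88, 87],
}
THRESHOLD = 70   # readiness floor the staff wants to protect
HORIZON = 6      # reporting periods to project ahead

def linear_forecast(scores, periods_ahead):
    """Ordinary least-squares trend line, projected forward."""
    xs = list(range(len(scores)))
    x_mean = sum(xs) / len(xs)
    y_mean = sum(scores) / len(scores)
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, scores))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (len(scores) - 1 + periods_ahead)

for unit, scores in history.items():
    projected = linear_forecast(scores, HORIZON)
    if projected < THRESHOLD:
        print(f"{unit}: projected {projected:.0f}, flag for added training and resources")
    else:
        print(f"{unit}: projected {projected:.0f}, on track")
```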

 

In addition to strategic readiness, AI software agents can be used to simulate major defense scenarios, providing military personnel a chance to hone operational judgment and tactical acumen. China’s military is aggressively employing this approach. Given the nation’s lack of recent combat experience, Chinese AI experts and military scholars have leveraged advanced computer war-gaming and simulation methods to prepare the country’s leaders for potential future war scenarios (Kania 2017). As just one example, in 2017, China’s Institute of Command and Control held the nation’s first ever “Artificial Intelligence and War-Gaming National Finals.” This event featured a “human-machine confrontation” between China’s top thinkers and an AI system known as CASIA-Prophet (先知) 1.0. CASIA-Prophet defeated the human teams by a 7 to 1 margin. In addition to enabling training and education, Chinese military leaders believe these games will allow their personnel to gain a greater appreciation of the trends in warfare (Kania 2017).

 

Machines That Sense

 

AI advances have the potential to perform a wide range of intelligence tasks faster and with higher accuracy than human analysts, thus producing information advantages that alter military power. Key aids to the potential of deep learning processes are systems that replicate human vision and enable complex simulation of human language for analytic purposes. Deep learning entails the recognition of unstructured information in the world around humans, including in the form of images, sounds, and other signals (Krizhevsky, Sutskever, and Hinton 2012). A famous recent example of this was Google Brain, a network of one thousand computers that scanned the internet, only to determine that, in fact, the digital domain is flooded with cat videos (Jones 2014). Regardless of the seeming frivolity of the finding, the project reflects an emergent set of technologies that often use neural networks for classification schemes and are now enabling software applications that can scan images and answer basic questions asked by humans (such as what object is the largest and bluest in a photo) (Ma, Zhengdong, and Li 2016). Similarly, natural language processing (NLP), a branch of AI that explores human language and human-machine dialog, has advanced over the past two decades from the ability to recognize individual words to a capacity for semantic search queries that understand latent meaning (Cambria and White 2014). For example, software agents can use semantic role-labeling, a process for analyzing predicates and verbs in sentences to identify agent-specific roles and attributes, to determine customer opinions about an entire company or specific product line (Li et al. 2014). In addition, NLP helps produce content. Microsoft developed an AI content-generation program that wrote news articles and Chinese poetry (Microsoft 2019).
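Production systems rely on trained models for semantic role labeling and sentiment analysis; the sketch below substitutes a deliberately crude lexicon-based tally, not true semantic role labeling, simply to show the aggregation step of turning individual statements into an overall opinion estimate about an entity. The lexicon, weights, and statements are invented.

```python
# Crude, invented lexicon; real systems learn such weights from labeled data.
LEXICON = {"reliable": 1, "helpful": 1, "improved": 1,
           "corrupt": -1, "broken": -1, "failed": -1}

statements = [
    "the new water project improved the district",
    "the local council is corrupt and the clinic is broken",
    "patrols have been helpful and reliable this month",
]

def score(text):
    # Sum the lexicon weights of the words that appear in the statement.
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

scores = [score(s) for s in statements]
print(scores)                                       # [1, -2, 2]
print("net sentiment:", sum(scores) / len(scores))  # slightly positive on balance
```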

 

Recognizing images and understanding language and meaning have significant implications for security practitioners and establishments of all stripes. For instance, these algorithms could make information-warfare campaigns and influence operations much more precise. At the tactical level, a software agent supporting a US Marine battalion operating in Afghanistan could mine population sentiment from recorded meetings with key leaders, radio broadcasts, and social media geotagged to the area. There are multiple efforts underway in the US defense and intelligence community to scrape social media data to understand local populations (McGrath 2016). At the operational level, software platforms can automate public affairs releases and propaganda, tailored to key demographics. These software agents could even change the font and images used based on recognizing prevailing trends and omit unpopular website links based on a recursive scan and analysis of external content.

 

Above the level of military operations, the mining of meaning from massive amounts of structured and unstructured data can increase the effectiveness of political warfare (Jensen 2018). In 2016, Russian cyber operatives attempted to interfere with the US election using relatively low-tech methods to gain access to political parties’ data and to access state-level voting systems (Valeriano, Jensen, and Maness 2017, 2018). The next such effort by Russian operatives could use social media bots to disseminate AI-generated content based on analyzing user sentiment and preferences. Just as Amazon predicts what you are likely to buy, foreign operatives seeking to undermine democratic institutions could predict inflammatory articles you are more likely to spread to your social network and use software agents to write tailored news stories. The crude approach taken by Russian troll houses during the 2016 US presidential election can be automated and tailored to increase the circulation of propaganda masquerading as news. There are plenty of reasons to expect China to have similar, if not better, capabilities than Russia. Beyond the nation’s rapidly advancing AI capabilities and stated goal to be the “premier global AI innovation center” by 2030 (Kania 2017), the Chinese Communist Party has an enduring commitment to “influence operations and active measures as a normal way of doing business” (Mattis 2018). The key takeaway is that the 2016 US presidential election should not be thought of as an anomaly. Modern political warfare will increasingly rely on AI software that helps data-mine user preferences and sentiment.

 

As the replication and automation of sensory perception underwrite the potential of deep-reasoning technologies, the generation and dissemination of such information, combined with innumerable other types of data, for use by an increasing number of sophisticated platforms undergirds the global adoption and integration of AI systems. Specifically, the combination of deep learning, image recognition, and semantic labeling both benefits from and enables “Big Data” repositories that can be utilized at immense scale for intelligence at the tactical, operational, and strategic levels. Consider recent investments by In-Q-Tel, a venture capital arm of the US Central Intelligence Agency (Forbes 2010). Among firms to receive funding, Fortius One and Geosemble create geospatial products, maps, and other data visualization tools that integrate social media feeds, using NLP to identify patterns in real time (Recorded Future 2019). Recorded Future uses deep learning and social media sites, again applying NLP techniques, to make crowdsourced predictions and identify invisible connections between actors. Big data, processed through software applications that identify clusters of meaning and patterns of activity, has the potential to make concealing activity difficult, a phenomenon on display in the use of social media, specifically photos uploaded by Russian soldiers to Vkontakte, a social media site, to locate the Russian Buk anti-aircraft missile system that shot down Malaysia Airlines Flight 17 (Bellingcat 2014). Furthermore, the Chinese government is deploying a software package called “Xue Liang (Sharp Eyes)” to identify criminals in housing complexes and public areas through a combination of face recognition and CCTV footage (Denyer 2018). In short, AI could increase a nation’s military power through decreasing the amount of resources and time required to conduct intelligence, surveillance, and reconnaissance and to wage twenty-first century political warfare.

 

Machines That Move

 

AI advances have the potential to power autonomous systems, like the feared “slaughter bots,” that increase military power, as states deploy low-cost, precision strike platforms that overwhelm adversary defenses. Robotics and autonomous systems are the AI research areas that receive the most attention from the national security community. In a 2016 report, the US Defense Science Board differentiated between two forms of autonomy: autonomy in motion and autonomy at rest (DoD 2016). Deep learning, image recognition, expert systems, and NLP represent autonomy at rest. Autonomy in motion refers to creating systems able to move in the physical world. Autonomy in motion, like autonomy at rest, relies on big data and deep-learning-type processing to produce accurate representations of the world in real time that a software application can learn from.

 

Consider driverless cars, also referred to as intelligent vehicles by researchers (Broggi et al. 2018). As with military drones and unmanned systems, industry has evolved from thinking about a vehicle without a driver to smart systems built around a vehicle’s data collection and integration into a larger “internet of vehicles” and “vehicular cloud” that uses sensors to capture information about the environment. According to one study, “the car is now a formidable sensor platform, absorbing information from the environment (and from other cars) and feeding it to drivers and infrastructure to assist in safe navigation, pollution control and traffic management” (Gerla et al. 2014). In short, no understanding of artificial intelligence is complete without understanding the way in which the aforementioned technologies can be enabled, can be augmented, and can physically manifest in real, kinetic terms.

 

Modern military applications of autonomy follow this trajectory, looking for unique forms of “human-machine collaboration” and “combat teaming” between manned and unmanned systems that enable a US battle network to sense, think/decide, act, and team faster than an enemy system (Defense Science Board 2016, 11–17). At the tactical level, emerging programs seek to reduce the risk to military personnel, reduce cognitive load, and increase the number of tasks a human team can simultaneously manage. That is, AI increases military power by increasing the number of tasks any one platform can accomplish. For example, the US Air Force Loyal Wingman/Avatar program turns fourth-generation fighter aircraft (e.g., F-15, F-16) into unmanned platforms connected to manned fifth-generation aircraft (e.g., F-22, F-35) (Mizokami 2017). The unmanned system can autonomously perform a range of assigned duties, while the more survivable, stealthy asset controls the mission. As part of its Aviation Restructuring Initiative, the US Army replaced manned Kiowa scout helicopters with unmanned Gray Eagle drones connected to manned Apache attack helicopters, creating a manned-unmanned hunter-killer team for ground combat (Vergun 2014).

 

At the operational level, these manned-unmanned teams enable new approaches to core missions like air interdiction, amphibious assault, and long-range strike. For instance, China’s Military Museum now has an exhibit showing a “UAV swarm combat system with swarms used for reconnaissance, jamming, and ‘swarm assault’ targeting an aircraft carrier” (Kania 2017). While it is unknown if China’s military has this “swarm assault” capability today (Lin and Singer 2018), in December 2017, the Chinese company Ehang employed an 1,180-drone swarm above the city of Guangzhou at the conclusion of the Global Fortune Forum. The drones all coordinated their movements autonomously, maintaining “a flight deviancy of a mere two centimeters horizontally and one centimeter vertically” (Lin and Singer 2018). Russia is pursuing similar autonomous aerial vehicle capabilities, while also pursuing new nuclear delivery platforms. For example, Russia’s “Ocean Multipurpose System Status-6” is an autonomous underwater vehicle that allegedly provides a nuclear weapons capability to strike US ports (Insinna 2018).

 

AI and Military Power

 

For all its promise, the central question is how AI will collide with human institutions and decision-making to affect military power. Enthusiasts stress that AI is a disruptive innovation (Pierce 2004; Christensen, Raynor, and McDonald 2015). Machines that learn, sense, and move have “the potential to be a transformative national security technology, on a par with nuclear weapons, aircraft, computers, and biotech” (Allen and Chan 2017). At the organizational level, AI, as a “cognitive technology,” will “enable organizations to break prevailing trade-offs between speed, cost, and quality,” increasing efficiencies and output (Schatsky, Muraskin, and Gurumurty 2015). Furthermore, the disruption will diffuse rapidly. Cold War-era defense labs and major defense manufacturers’ investments in AI are “far outmatched by that of the commercial automotive or information and communication sectors [and] less appealing to the most able personnel” (Cummings 2017). At the strategic level, the introduction of AI could allow rising powers, as new market entrants, to displace established military powers like the United States (Allen 2018). According to Alphabet Inc. Executive Chairman Eric Schmidt, China will catch up with, if not surpass, the United States as the center of AI innovation in the next five years (Stewart 2017). These accounts all assume that integrating systems that can learn, sense, and move will increase military power.

 

Military power is “a measure of how states use organized violence on the battlefield or to coerce enemies” that involves both “hardware” and “software” considerations (Eliason and Goldman 2003; Horowitz 2010; Art 1980). Hardware is the “combination of technology used to fight.” Software is “the organizational processes used to actually employ the hardware” (Horowitz 2010). For example, it is not enough to invent new technology, such as the aircraft carrier. Achieving a position of advantage required developing organizational processes, from doctrine to training to recruiting humans, that integrated aircraft carriers, the hardware, into a broader system of naval warfare (Rosen 1994; Till 1996). Stephen Biddle echoes this notion in his study on military power in arguing that modern software, meaning the systematic development of tactics that use “cover, concealment, dispersion, small-unit independent maneuver, suppression and combined arms integration,” produced military power and, through it, determined the outcomes of interstate war (Biddle 2006). Similarly, John Mearsheimer argued that the relationship between military power and “conventional deterrence is largely a function of strategy” (Mearsheimer 1983, 7). How AI is integrated across a military organization, from recruiting to training and education to concepts and doctrine, will likely be as important as the technology.

 

Indeed, the promise of transformation often collides with the constant feature of politics: human beings and our institutions. As a result, evolutionary change is more common than sudden revolutionary change as people integrate new technology into their affairs. Studies on military innovation often show that new technologies comingle with social and institutional factors, slowing the rate at which they move from the lab to the battlefront or political fault line (Murray and Millett 1998). A range of organizational, financial, and cultural factors will shape how new systems like AI translate into military capability and diffuse in a competitive international system (Adamsky 2010; Horowitz 2010). Even when new capabilities do alter the intrinsic military balance between nations, intelligence estimates are still often uncertain (Mahnken 2003; Yarhi-Milo 2014). In the end, major political events, like the outbreak of war, have historically been better explained by politics than by changes in the military balance brought by new technology (Lieber 2005). Despite AI’s promise and peril, there are two likely vectors of limitation that will alter how, or even if, it produces disruptive change in the character of military power: integration and calculation.

 

Limits to Integration

 

While AI-enabled autonomous systems have the potential to alter military power, national militaries will invariably face legal, organizational, cultural, and technical challenges in effectively incorporating them. In particular, militaries will be constrained in at least four unique ways. First, militaries and military adoption of AI will be limited by legal standards for the operation of autonomous systems and by evolving standards for big-data-powered intelligence functions. Second, the integrity of such legal barriers will itself be brought into question by evolving strategic dynamics that will see some adversaries seek to establish new norms of accepted behavior in ill-defined spaces. Third, AI adoption will inevitably be shaped by institutional and cultural preferences within state militaries, as well as by, fourth, the positive and negative outcomes of ad hoc innovations in the use of AI across military subunits (Kollars 2014). These determinants of integration will not only affect how AI is systematically adopted but will also shape the viability of different technologies across the categories outlined in the sections above.

 

One example of the legal challenges to be faced lies in US Department of Defense (DoD) Directive 3000.09, Autonomy in Weapons Systems, a regulation, updated in 2017, that restricts the development and employment of lethal autonomous systems (DoD 2017). DoD 3000.09 specifically states that “[a]utonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” That is, the United States has already limited autonomy on legal, if not ethical, grounds, causing institutional restraints on increasing autonomy and hence on speed on the battlefield.

 

Alternatively, Russia is moving to use strategic investments in AI to enable increased autonomy as a means of deterring the United States. The Russian Military Industrial Committee has modernization targets in place to make more than 30 percent of combat formations autonomous by 2025 (Simonite 2017). In December 2017, Russian President Vladimir Putin told a gathering of Russian generals, “Russia must be among the leaders, and in some areas the absolute leader, in creating an army of the new generation, an army of a new technological paradigm. This is an issue of highest priority for ensuring our sovereignty, peace and safety of our citizens, a stable development of the country, pursuing an open and independent foreign policy that is based on the interests of our country” (Zarembo 2017). Reports from Russian state media tout advances in AI, including “a system to help pilots fly fighter planes, a project by St. Petersburg-based Kronstadt Group to equip drones with artificial intelligence, a similar effort for missiles by the Tactical Missiles Corporation, and a Kalashnikov combat module using neural networks.” According to former commander of Russia’s Aerospace Force and Chairman of the Federation Council Defense and Security Committee Viktor Bondarev, “the day is nearing when vehicles will get artificial intelligence. So, why not entrust aviation or air defense to them?” (Shtokal 2017). Beyond these investments, Russian officials have also declared that the United Nations’ efforts to limit autonomous weapons systems are premature and unnecessary, indicating a strategy to use legal restraints and international institutional prohibitions against the United States in an AI arms race (Tucker 2017).

 

Organizationally, the potential of AI systems must inevitably vie with the power of entrenched preferences in defense establishments around the world, something that is equally present in the militaries of authoritarian and democratic states. For example, the internal combustion engine did not change military organizations overnight. A combination of a conservative officer corps and organizations built around hierarchy, institutional incentives, and traditions led to the survival of horse cavalry well into the twentieth century (Katzenbach 1973). Advocates of horse cavalry in the US Army went as far as declaring in August 1939, just one month before mechanized German forces invaded Poland, that Polish horse cavalry significantly increased military power and even enabled mounted infantry to “stop tanks by firing at a range of 40 to 50 meters against the chinks in the armor” (Johnson 1998, 138). Parochial interests in the military profession alter not just the adoption of new capabilities but how they factor into war planning (Snyder 1984). Bureaucracy alters the pathway through which any innovation produces military power (Armacost 1969; Wilson 1989; Brown 1992; Halperin and Clapp 2007). For example, “knowledge-laden organizational routines” about blast damage altered how many nuclear devices planners assumed were required to destroy critical targets during the Cold War, leading, in part, to overkill and nuclear proliferation (Eden 2004, 3). Military bureaucracy, whether in the form of entrenched interests or organizational routines, will alter how AI-enabled systems are integrated into the armed forces and alter military power.

 

AI will also interact with prevailing cultural norms and strategic culture and, in the process, produce unanticipated limitations. Military service culture, as a veritable personality type, will alter the types of AI programs bureaucrats pursue as part of their defense modernization strategy. According to Carl Builder, “institutions, while composed of many, ever-changing individuals, have distinct and enduring personalities of their own that govern their behavior” (Builder 1989). A combination of “civilian policy makers’ . . . beliefs and military cultural norms guide . . . decisions about the organizational form of military” (Kier 1995; Kier 1997). Ethos, or unspoken behavioral norms, can shape how military organizations learn to integrate new capabilities. During World War I, “the British army used the institutional ethos of its members to interpret the nature of war, identify problems, pose solutions, and implement change” (Kier 1995; Kier 1997). Prevailing assumptions about the use of force and role of military power, as strategic culture, can also bound the ways in which any innovation becomes the hardware and software for military power (Johnston 1995; Adamsky 2010). With AI, these assumptions will likely be layered in code and the ontologies neural networks use to sort information and make inferences about threats and possible courses of action. The more complex machines that learn and sense become, the more layers they require and, through them, the more ghosts of old assumptions may haunt future logic.

 

More obvious organizational and cultural limitations are already apparent in how the US military adopts AI-enabled systems into its combat formations. For instance, both the US Navy and Marine Corps aviation components have continually resisted adding even a single armed semi-autonomous unmanned aircraft to their inventories, despite the Air Force having fielded such aircraft for going on seventeen years now (Brewster 2017; Cuomo 2017; Hendrix 2017; Marron 2017). In the Marine Corps case, this resistance is also despite the service’s ground element requesting the capabilities in urgent combat needs statements dating back to 2004 (Brewster 2017; Cuomo 2017). The level of resistance ultimately reached the point where, in early 2018, the Department of the Navy’s assistant secretary for research, development, and acquisition issued a memorandum to both Navy and Marine Corps leadership stating that the services have a “strategic imperative to exploit emergent and rapidly developing unmanned and autonomous technologies, while building a solid infrastructure upon which our future forces will be based, to ensure our continued warfighting superiority” (Department of the Navy 2018).

 

Service resistance has directly impacted the organizational structure of the DoD as well. Just before the turn of the twenty-first century, visionary and determined Air Force leaders, such as Generals Ronald Fogleman and John Jumper, helped their service maneuver around and battle through institutional inertia to the point that today the nation’s air service has more unmanned than manned aircraft pilots (Whittle 2014; Sachs 2017). Air Force leaders now also explain publicly that their remotely piloted aircraft units have had and continue to have decisive impacts on US combat operations. For example, in 2016, Air Force MQ-1 “Predators” and MQ-9 “Reapers” provided joint force commanders 351,000 hours of dedicated support, while conducting more than three thousand strikes against ISIS (Pomerleau 2017). As a point of comparison, of the 185,000 active-duty personnel in the Marine Corps today, fewer than a thousand currently have jobs focused specifically on unarmed unmanned systems employment, much less on making rapid advances in artificial intelligence (Radcliffe 2016).

 

Organizational and cultural considerations inform how thinkers in the People’s Liberation Army (PLA) envision integrating AI into their combat formations. Whereas the current US framework focuses on “centaurs,” unique manned-unmanned teaming and human-machine collaboration combinations that keep humans in the loop, the PLA favors a higher degree of autonomy (Kania 2017). In the PLA, “technology determines tactics” but not strategy (Blasko 2011). Of particular interest to Chinese leaders appear to be technologies that allow them to close the technological gap with the United States and achieve new forms of conventional deterrence. For example, Chinese strategic theory promotes the idea of an “information umbrella” that produces deterrence through information superiority (Newmyer 2010). AI enables information superiority through automated cyber offense and defense systems as well as new approaches to predictive analytics. Leading PLA thinkers see a “trend toward future ‘informatized intelligent warfare’ (信息化智能報戰) necessitat[ing] the intelligentization of equipment and integration of AI into command and control, especially for information operations forces” (Kania 2017, 27; Taodong 2012, 211). In addition, the Chinese see immediate benefits to autonomous surveillance and strike platforms that offset US military capabilities. According to Chinese defense officials, “our future cruise missiles will have a very high level of AI and automation” (Kania 2017, 26). Institutional comfort with centralized control and a strategic culture that puts a premium on information superiority shape how Chinese military leaders integrate AI-enabled systems.

 

In addition to organizational and cultural challenges, technical barriers remain as well, particularly for military artificial intelligence applications. Problems in the commercial sector can often be decomposed such that autonomous systems tackle the easier parts first, while humans supervise or directly intervene to solve the more difficult parts (Defense Science Board 2016). Amazon’s use of this approach makes millions of people’s lives easier when buying books, clothes, and countless other items (Camhi and Pandolph 2017). However, for the US military, committing to such an approach in the near term remains problematic given ongoing and anticipated communications shortfalls in potential spectrum-contested operating environments (Defense Science Board 2016). Imagine what would happen if the US military changed its force structure to model Amazon and then had to operate in a satellite-degraded environment or in one where a peer adversary cut undersea fiber-optic telecommunications cables coming into and out of the continental United States (Chuter 2017).

 

American adversaries certainly have a vote when it comes to whether the US military can effectively employ artificial intelligence-enabled, autonomous systems; many such challenges to AI use from foreign sources, which admittedly take a more explicitly technical form than limitations discussed to this point, will nevertheless inevitably clash with institutional reorganizations designed around the efficiency of an AI-enhanced force posture. Just as possible communications spectrum challenges stand in the way of adopting an Amazon-like logistics model, for instance, similar obstacles confront emerging drone swarm concepts. Such concepts must be proven to work in persistent radio frequency jammed or denied environments (Defense Science Board 2016, 87). Additionally, such concepts must be able to withstand cyber threats. The US military has already experienced the impact of such threats. In 2011, for example, the unmanned aircraft cockpits at Creech Air Force Base from which pilots flew MQ-1 and MQ-9 aircraft in the Middle East became infected with difficult-to-remove malware (Defense Science Board 2016, 92). While cyber defense network experts were ultimately able to remove the malware, this example serves as a cautionary tale highlighting that “as the degree of autonomy increases in U.S. platforms, the cyber-vulnerability of subsystems will have increasing impact” (Defense Science Board 2016, 92). And while it is certainly possible to defend against such cyber-vulnerabilities, the US military continues to have difficulties recruiting and retaining its cyber workforce (Corbin 2016). These difficulties are often due to entrenched views on dated military manpower models that do not work effectively when applied to highly technical skill sets, such as those performed by the cyber workforce. Given DoD Directive 3000.09, Autonomy in Weapons Systems, guidance, as well as cultural norms across many US military services today, attempts to incorporate more AI-enabled systems in these organizations will encounter these same manpower model obstacles, particularly when the demand increases for humans highly skilled in fields such as computer science, mathematics, engineering, and physics.

 

Limits to Calculation

 

Even if states overcome integration challenges as they field AI systems to increase their military power, human decision-making will still cast a shadow on machine decision-making. In addition to legal, organizational, cultural, and technical challenges to AI adoption, there are important human elements in decision-making that affect how leading defense organizations modernize their combat formations. In theory, AI should lower the costs and increase the expected benefits of military operations. This in turn should lower the risk associated with military action and produce tangible coercive benefits (Pape 1996, 16). The state with the AI-enabled drone swarm that operates faster than the defense can respond could hold its rivals hostage during crises as adversary decision makers fear destabilizing first strikes that limit their ability to resist over time. For example, the fear of a “technological surprise attack” animates PLA and Chinese leaders’ drive for military innovation, with a particular emphasis on AI (Kania 2017, 12). New capabilities alter the expected costs and benefits of military action. AI, by making “military and intelligence activities that currently require the efforts of many people achievable with fewer people or without people,” reduces the costs of military action while increasing the expected benefits through predictive analytics and autonomous precision targeting (DoD 2016; Allen and Chan 2017). Beyond the tactical level, these shifting costs have the potential to alter the “risk-return trade-off” central to bargaining theory (Powell 2002). The first nation with full AI integration thus has the potential to gain a position of competitive advantage.
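The shifting “risk-return trade-off” can be made explicit with a back-of-the-envelope expected-utility comparison. The probabilities and payoffs below are invented purely to show how assumed AI-driven changes (higher probability of success, lower expected cost) tilt the calculus toward action; they are not estimates of any real scenario.

```python
def expected_value(p_success, benefit, cost):
    """Simple expected utility of using force: payoff if it works, minus cost."""
    return p_success * benefit - cost

# Invented baseline: a risky operation with modest odds and high expected cost.
baseline = expected_value(p_success=0.45, benefit=100, cost=60)

# The same operation as decision makers believe it looks with AI-enabled
# targeting and fewer personnel at risk: better odds, lower expected cost.
ai_assumed = expected_value(p_success=0.65, benefit=100, cost=35)

print(baseline)    # -15.0: the use of force looks unattractive
print(ai_assumed)  #  30.0: the same action now appears worthwhile
```

The caution developed in the remainder of this section is that these inputs are themselves estimates produced by institutions and algorithms, and they can be wrong.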

 

Yet, humans will play a role in these calculations of costs and benefits, altering the hardware and software of military power. While the tactical and operational benefits of AI are clear, the strategic effects are uncertain. First, estimates of military power and strategic intention are prone to error. In a 1966 study for RAND, Andrew Marshall noted that military power is rarely the transitive relationship it is made out to be in force-ratio calculations and comparisons between the military forces of one country and another (Marshall 1966). Most analysis tends to strip military power from its institutional and political context and avoids dealing with the contingency in which it will be used and, with it, associated questions about the effects of geography, logistics, and training.

 

Furthermore, according to Marshall, “most discussions and forms of analysis tend to treat governments, military organizations, etc., as though they were equivalent to individual rational decision makers and not the complicated bureaucratic institutions that they in fact are” (Marshall 1966, 6). To simplify the complexity of comparing military forces and estimating military power, analysts reduce bureaucratic noise, preferring to count tanks and bombers rather than consider their logistical footprint, utility in different defense planning scenarios, and geographic limitations, not to mention the politics involved in employing any military force. Assumptions about military power will drive machines that learn and sense to make many of the same faulty inferences Marshall highlighted in 1966. While AI can process more diverse data streams faster than humans, the algorithm will still rely on assumptions that simplify complexity, such as counting tanks as opposed to counting tanks recently repaired, operated by a trained crew, and optimal for a contingency in the open desert.
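Marshall's point about stripped-out context can be shown in a few lines: the same inventory produces very different balance estimates depending on which assumptions are folded in. The counts, readiness rates, and weights below are invented; the sketch only illustrates how the hidden assumptions a learning system inherits drive its estimate.

```python
# Invented tank inventories for two notional sides.
force_a = {"tanks": 1000}
force_b = {"tanks": 600}

# Naive comparison: the raw bean count behind a transitive force-ratio claim.
naive_ratio = force_a["tanks"] / force_b["tanks"]

# Adjusted comparison: fold in the assumptions usually stripped from the count.
assumptions_a = {"operational_rate": 0.55, "crew_training": 0.6, "terrain_fit": 0.7}
assumptions_b = {"operational_rate": 0.85, "crew_training": 0.9, "terrain_fit": 0.9}

def effective(count, factors):
    eff = count
    for f in factors.values():
        eff *= f
    return eff

adjusted_ratio = (effective(force_a["tanks"], assumptions_a)
                  / effective(force_b["tanks"], assumptions_b))

print(round(naive_ratio, 2))     # 1.67: side A looks clearly stronger
print(round(adjusted_ratio, 2))  # 0.56: the same data under different assumptions
```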

 

In addition to these errors of reduction, there are unique forms of algorithmic bias that cascade in complex systems and produce unintended outcomes. These biases can result from sensing errors in identifying text and images or from more perverse learning errors arising from feedback loops in the environment. Perhaps the most disturbing example of a sensing error occurred in 2015, when Google's automated image recognition system in a photo application misidentified African Americans, erroneously creating an album titled "Gorillas" (Schupak 2015). When machines that sense, and that use data from interactions to learn, engage with the real world, they start to reflect its pathologies. It took only a few Twitter bots less than a day to turn Microsoft's Tay, an AI chat agent designed to act like a young American woman, into a racist (Alba 2016).
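
As a toy illustration of this kind of learning error, consider an agent that updates its beliefs from every raw interaction it receives. The labels and counts below are invented; this is our sketch, not a description of Tay or any fielded system.

```python
# Toy illustration of a learning-error feedback loop: an agent that learns from
# unfiltered interactions drifts toward whatever those interactions contain.

from collections import Counter

class NaiveLearner:
    def __init__(self):
        self.memory = Counter({"benign": 5, "hostile": 5})  # balanced prior

    def respond(self) -> str:
        # Echoes the majority label it has learned so far.
        return self.memory.most_common(1)[0][0]

    def learn(self, interaction_label: str):
        # No curation or filtering: every interaction updates the model.
        self.memory[interaction_label] += 1

agent = NaiveLearner()
# A small, coordinated stream of skewed inputs is enough to flip the agent.
for _ in range(20):
    agent.learn("hostile")

print(agent.respond())  # "hostile": the environment's pathology is now the model's
```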

 

While Google and Microsoft quickly recognized and apologized for their artificial intelligence failures, these unsettling examples highlight the hurdles the US military must overcome before it can fully embrace facial recognition software. Consider what would happen if military intelligence professionals fed a similarly flawed image recognition system hundreds of pictures of adversary fighters assessed to be located in an urban area filled with hundreds of thousands of noncombatants, and that system then passed its outputs directly to loitering, armed unmanned aircraft programmed to strike the adversary fighters on sight. The resulting tragedy would both undermine the immediate tactical mission and most likely carry significant political costs, domestically and internationally.

 

Put another way, "as the tech industry begins to create artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will write code, there's an even greater need to root out algorithmic bias" (Garcia 2017). In an effort to limit these effects, Facebook's AI personal assistant M keeps a human on the loop to monitor the system (Metz 2017). Whether through sensing and learning bias or through human overseers, bias will remain part of national security decision-making. Algorithmic warfare will likely still exhibit a tendency toward fundamental attribution error, as either implicit human bias or prior coding leads learning machines to make inferences about rapidly changing cyber intrusions, enemy autonomous swarms, and bots distributing propaganda in social media during a political crisis (Jervis 2017).
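
A minimal sketch of what such a "human on the loop" safeguard could look like in code follows. The names, fields, and threshold are assumptions for illustration only, not Facebook's or any military's actual design.

```python
# Sketch of a human-on-the-loop gate: machine recommendations below a confidence
# threshold, or with lethal consequences, are routed to a person for review.

from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str
    confidence: float   # the model's self-reported confidence, itself fallible
    lethal: bool

def dispose(rec: Recommendation, review_threshold: float = 0.95) -> str:
    if rec.lethal or rec.confidence < review_threshold:
        return f"HOLD {rec.target_id}: escalate to human reviewer"
    return f"LOG {rec.target_id}: automated handling permitted"

print(dispose(Recommendation("obj-17", confidence=0.88, lethal=False)))
print(dispose(Recommendation("obj-22", confidence=0.99, lethal=True)))
```

The design choice is deliberately conservative: any lethal recommendation is held regardless of confidence, which trades speed for accountability.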

 

These algorithms, which will reflect path-dependent assumptions about military power, also have the potential to produce dissonance, since decision makers are more likely to base their calculations on vivid examples in the moment and on subjective credibility (Yarhi-Milo 2014, 3). At the strategic level, for example, Keren Yarhi-Milo demonstrates how leaders such as US President Jimmy Carter in 1979 were more affected by personal encounters with Soviet leaders than by intelligence community estimates of military capabilities (Yarhi-Milo 2014, 114–15). Historical analogies also tend to shape foreign policy decision-making and military strategy: US decision makers, for instance, drew on analogies to Korea in making decisions about military intervention in Vietnam in 1965 (Khong 1992). Just because a deep learning algorithm makes an observation about a series of strategic events in a deliberate, calculated manner does not mean that leaders will be able to overcome their own subjective beliefs and experiences.

 

Beyond bias and decision heuristics, there is a deeper question about the level of certainty possible in complex systems like warfare. At the operational and tactical levels, the Prussian military theorist Carl von Clausewitz captured the inherent uncertainty at play in war and how it shapes human judgment. For Clausewitz, "war is the realm of chance. . . . since all information and assumptions are open to doubt, and with chance at work everywhere, the commander continually finds that things are not as he [or she] expected" (Howard, Paret, and West 1984, 101). As a nonlinear system, every battle and campaign is contingent and subject to emergent properties (Beyerchen 2007, 45–56). This uncertainty can be paralyzing. According to Clausewitz, "with its mass of vivid impressions and the doubts which characterize all information and opinion, there is no activity like war to rob men of confidence in themselves and in others, and to divert them from their original course of action" (Howard et al. 1984, 108). As a result, great commanders tend to rely on "coup d'oeil," an "inward eye," to recognize change in the environment sufficient to act (Howard et al. 1984, 102).

 

Second, the enemy gets a vote, producing a complexity unique to war. Every change to military capabilities (the hardware) and to their battlefield employment through new concepts and organizations (the software) is subject to a corresponding reaction. Military innovation, along with battlefield setbacks, leads to tactical adaptation (Farrell 2010; Russell 2010; Finkel and Tlamim 2011; Serena 2011; Foley 2012; Lambeth 2012; Farrell, Osinga, and Russell 2013; Serena 2014). For British General J. F. C. Fuller, this was the "constant tactical factor" (Fuller 1932, 266–67). In military theory, this feedback loop complicates planning and execution. For Helmuth von Moltke the Elder, "no plan of operations extends with certainty beyond the first encounter with the enemy's main strength" (Hughes 1993, 45). For Clausewitz, war is neither exclusively art nor science but a "clash of interests" directed at an "animate object" (Howard et al. 1984). The introduction of AI will not change the constant tactical factor. There will be a host of cheap, expedient adaptations to hunter-killer swarms, such as the experiments by the French military and Dutch police using eagles to counter aerial drones (Roberts 2017). The extent to which AI changes military power will depend on the distribution and speed of adaptation, both functions of human creativity and judgment under uncertainty.

 

To reduce uncertainty, humans, regardless of the machines they employ that learn, sense, and move, will apply the decision heuristics associated with judgment in time-compressed and uncertain environments. With AI, the challenge will be threefold. First, decision makers must determine how they will judge the utility and accuracy of AI-enabled sensory output and analysis. How can AI information products enhance human decision-making procedures? Moreover, to what degree should such products be trusted, as reflecting an appropriate assessment of situational and strategic context both spatially and temporally, to directly inform human decisions? Second, and relatedly, to what degree should AI-enabled decision-making replace or complement human decision-making? And finally, what is the condition of all of the above among the forces of potential opponents in a conflict? These concerns speak to the perennial problem in international security of misperception as a multifaceted challenge to be overcome by all decision makers. The added complexity of AI-augmented systems is likely to manifest as uncertainty about the scope of available information, the significance of information, and the meaning of information (Levy 1997; Goldgeier and Tetlock 2001; McDermott 2004; Jervis 2017). In all cases, humans and institutions are prone to develop compensatory cognitive shortcuts.

 

In addition to the problems of uncertainty and decision heuristics, there is little reason to believe that AI, even if it increases military power, will alter the balance of resolve at the core of most international crises. New technology can change military power, but war is and will remain a continuation of politics by other means.

 

At the strategic level, questions about credibility and reputation are key parts of actual military power. Weaker states tend to resist even the most credible military threat when they view their survival as at stake (Haun 2015, 3). In other words, even if the United States were the first to field AI-enabled systems en masse, the resulting increase in military power would not guarantee strategic outcomes. If national survival is at stake, small states will still likely resist drone swarms, and there will remain a unique bargaining power associated with a society's willingness to suffer. Military force, as a means, tends to produce outcomes only insofar as it aligns with political objectives, the ends. Decision makers will look to either the past actions of their adversary or, more likely, the current context to weigh their options, not military power in isolation (Press 2006). Furthermore, reputation plays a role (Sechser 2018). AI might increase a state's power, but reputation is a political effect based on status, past behavior, and the political context. Just because Vladimir Putin mobilizes an autonomous motorized rifle brigade on the border of Ukraine does not mean that Ukraine will back down, especially if it expects future threats. These arguments find credence in empirical studies showing that, historically, the balance of military capabilities does not determine militarization (Maoz 1983; Stam 1999; De Mesquita 2013).

 

Finally, it should be noted that AI technologies will inevitably shape new barriers to gauging intention, both within military bureaucracies and between national security establishments. For the foreseeable future in particular, deep learning employed to predict intention will likely remain highly correlative and prohibitively dependent on a range of design factors that emphasize the use of (often human-compiled) data on historical patterns and the parameters of select modeling practices. An AI system set to consider potential combatants in Afghanistan, for instance, may be trained on legacy data that unreasonably excludes nonmale and older demographic profiles. This is quite clearly prohibitive from an analytic perspective. Likewise, as we discuss further below, it can trap dangerous bias and flawed assumptions in the machine. At the same time, massive investment in AI observed from abroad is likely to lead decision makers to perennially overestimate the degree to which they are singularly the focus of prediction efforts. Such psychological ripple effects of adaptation to the landscape of AI-augmented military processes in international affairs are dangerous, as they both further diminish the reliability of narrow AI systems and incentivize an arms-race mentality based on "perfecting" machine-learning advantages for military forces.
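
As a purely hypothetical sketch of how such a legacy collection filter can trap bias in the training data itself, consider the following; the fields, values, and filter are all invented for illustration.

```python
# Sketch of a legacy training filter silently narrowing the data an
# intention-prediction model will ever learn from (hypothetical records).

legacy_records = [
    {"age": 24, "sex": "male",   "combatant": True},
    {"age": 31, "sex": "male",   "combatant": False},
    {"age": 57, "sex": "male",   "combatant": True},
    {"age": 29, "sex": "female", "combatant": True},
]

# A filter inherited from an old collection policy excludes whole demographics.
train = [r for r in legacy_records if r["sex"] == "male" and 18 <= r["age"] <= 45]
excluded = [r for r in legacy_records if r not in train]

print(len(train), "records used;", len(excluded), "profiles the model will never see")
```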

 

Algorithmic Enhancements or Ghosts in Our Machines?

 

How AI affects the balance of military power in the international system could be a defining question of the twenty-first century. Yet the technology alone will not change how states prepare for war, coerce their rivals, or fight their battles. In reality, as Biddle notes, the interaction of technology and military power is a complex affair in which human cognition and structures recursively subsume, reference, and shape innovation. Theoretical generalizations about machines that learn, sense, and move will miss the "nonlinear, contingent and . . . serendipitous" ways in which new hardware interacts with human institutions and judgment (Murray and Millett 1998).

 

The promise of AI is likely to be offset by the complexities of human institutions and judgment. Modern defense bureaucracies will still wage pitched battles over turf and resources that slow, if not outright undermine, the adoption of AI applications. Service identity, ethos, and norms will intervene to limit the decisions and tasks soldiers assign to machines. In many ways, transformative technologies like drone swarms are more likely to see rapid adoption for a range of purposes because they fit within the existing procedures assigned to precision strike. Far less likely is the rapid integration, across entire militaries, of deep learning with human judgment for the purposes of planning future wars, understanding adversary intentions, and ultimately using force in pursuit of political objectives.

 

That said, it is undeniable that we stand at or near an inflection point, where the critical mass of advances in information and robotics technology, mathematics, and computational infrastructure portends a near-term set of unprecedented changes to the shape of military power. The core takeaway from the discussion of the limitations on calculation and integration described above, we argue, is a reasonably simple one. Both researchers and practitioners must consider the degree to which AI, fully integrated across military functions, might present in international security affairs as a far-reaching ghost in the machine. Insofar as operators and decision makers will often be unable or unwilling to consider the manner in which AI systems come to conclusions, complex machine-learning foundations stand to detach human deliberation from empirics. A commander or civilian leader fed probabilistic data about an impending attack might not know enough to question the validity of the conclusion. Likewise, such a leader might not care to question the conclusion for fear of ethical misconduct in the face of seemingly reasonable, actionable intelligence. This dynamic would likely only be exacerbated by the speed of AI systems and by automation, as complex assessment and detached agency prompt implemental reactions in security decision-making.

 

This concern about how AI limitations might manifest in ways that distort control of deliberative and executive military processes points to a range of accountability and motivational challenges linked to the integration of new technology well into the future. Prospects for instability stemming from AI are likely to be heightened by the way in which rapid and uncoordinated adoption of expansive platforms complicates workforce education, encourages yet further breakneck development, and reduces incentives to question technology for want of maximal efficiency. And yet, at the same time, the enhancements to effectiveness that accompany the emergence of new algorithmic ways of war cannot be overlooked. A soldier must know the confines within which technological outputs should be questioned, as should commanders and civilian leaders. Moreover, planners and operators must be compelled to harness AI technology within restraining mechanisms for oversight, something that necessarily implies making narrow AI systems even narrower. Adoption of AI systems must, in short, be shaped by human systems designed to mitigate the pathologies of delegated intelligence. Doing so will not only shield militaries from the negative externalities of AI ghosts internally but will also clarify the signals to be interpreted by foreign adversaries.

 

Conclusion

 

Identifying the intervening factors likely to distort how different states, as well as nonstate actors, wage algorithmic warfare is the first step toward developing an analytical framework for studying how AI will shape security relations. Security scholars owe practitioners reasoned deliberations, from experimental studies to crucial historical cases, that help visualize and describe how any transformative technology collides with human institutions and judgment. Leaving the future to technological determinists promising risk-free war by AI swarms and to activists warning of "slaughter bots" risks cultivating threat inflation likely to distort defense expenditures and security policy.

 

This point has particular meaning in the context of the emerging debate over the proliferation of AI, a debate that parallels so many others on whether a new technology is revolutionary or not. Are we likely to see AI arms races as countries push for more sophisticated means of automating deliberation and implementation in war? On balance, the admonition of Horowitz, Kreps, and Fuhrmann in their oft-cited work on the proliferation of unmanned drone platforms seems appropriate and applicable even to AI. At present, we simply do not face a transformative enough threat from emerging sensing, learning, or robotics technologies to imagine an arms race concentrated on discrete military capabilities. Simply put, AI does not yet promise to change states' abilities to prevail in major conflict. That may certainly change in the years to come. Here again, however, it is worth recalling the main argument we present: that institutional context poses a more serious form of risk in the use of AI in war than does any near-term application of the technology itself.

 

To summarize the analysis in the sections above, we suggest five discrete directions for future research. First, scholarship on the shape of AI-enabled international conflict in the near to medium term would do well to consider the legal limits on automated lethality and attribution that constrain Western militaries' application of new information technologies. Second, and relatedly, scholars might effectively adjudicate the viability of deterrent and coercive frameworks for managing risk in international affairs by extrapolating where key adversaries are likely to push the envelope in attempting to shape favorable norms of autonomous interaction in as-yet-uncontested space. Third, it is imperative that efforts to bridge the gap between practice and academic theorizing focus on the lessons of history when it comes to innovation and the navigation of socio-institutional barriers to adoption. Fourth, there is great need to avoid ignoring the potential bottom-up sources of AI usage and development: much disruption and rapid advancement in the employment of machines that learn, sense, and move will come from the private sector, as well as from the improvisational experience of military suborganizations deployed in conflict. And finally, scholarship on international security must keep its eye on the ever-present cognitive-psychological human determinants of AI adoption. Here, the goal must, at least in large part, be to understand how interactions between humans and systems that mimic human functions shape operational outcomes in the form of preference formation and trust in the underlying technology.

 

With each of these imperatives, fortunately, an artificial intelligence revolution, even one focused on narrow technologies, simply demands a new interface with established conceptual traditions in the extensive literature on international politics and security. The task now before IR scholars is to employ this immense toolkit to grapple with the critical questions that link artificial learning to human cognition.

from The Decoder: https://the-decoder.com/ai-in-war-how-artificial-intelligence-is-changing-the-battlefield/

 
