
Blood and robots: How remotely piloted vehicles and related technologies affect the politics of violence

Political Science

by 腦fficial Pragmatist 2023. 3. 15. 15:56

ABSTRACT
New technologies such as Remotely Piloted Vehicles (RPVs) make it possible to remove human beings from direct involvement in combat. How will this evolving dynamic affect the practice and purposes of political violence? Will conflict become ‘costless’ in human terms as machines replace people on the front lines or will the logic of war continue to call for human sacrifice? While considerable attention has been devoted to the role of technology in transforming warfare, little is known about how new modes of combat will affect established motives for using force. I explore these political dimensions of new modes of conflict, drawing three basic conclusions. First, to the degree that substituting machines for humans lowers the costs for fighting, conflict will become more frequent, but less definitive. Second, in a reversal of previous trends, battlefield automation promises disproportionately to revitalise ground elements of military organisations. Finally, regrettably, new technologies should weaken inhibitions against targeting civilians.

 

Introduction

 

Human beings have used violence against one another for as long as there have been human beings. Violence enables actors to conquer or compel. Because violence works, it has remained a critical tool of politics. A big part of the history of civilisation is wrapped up in the evolution of inflicting, or avoiding, harm. The human experience with violence as a tool of influence has therefore always been double-edged. Harming others meant exposing oneself to the hazard of being harmed.

 

Today, the link between imposing and incurring harm has begun to fray. For the first time, a growing number of actors will be able to inflict damage or death with little prospect that those causing harm will themselves be subject to death or injury. Unmanned Aerial Vehicles (UAVs), like the US MQ-1 Predator and MQ-9 Reaper, are among the most recognisable examples of a class of systems (Remotely Piloted Vehicles [RPVs] or ‘drones’) that physically separate their operators from the battlefield, conceivably making war safer, at least for one side. These ‘warrior’ machines have already begun to supplant human soldiers, sailors and airmen on the battlefield. Military automation has evolved to a point where it is possible to contemplate conflicts in which human beings are no longer directly engaged in combat. Can warfare be ‘costless’ in this way and truly remain war?

 

Rather than debating definitions, it may be more rewarding to assess the formative issue of how removing humans as agents and/or targets of violence affects the functionality of conflict. Approached in this way, ‘costless war’ ceases to be a separate subject of speculation and debate, becoming instead an avenue for new insights about warfare generally. Interested observers will wish to know how military automation is likely to affect the purposes to which war is typically applied.

 

Individuals, groups and societies use violence to conquer or compel. Yet, there are other ways to achieve the ends for which force is employed. Indeed, most disagreements are addressed through talk, rather than action. Adversaries typically benefit if they can forge the same treaties or tacit bargains that terminate contests before fighting begins. In the absence of fighting, however, one cannot know for sure what force will produce. It is tempting for competitors to exaggerate or become excessively optimistic about their resolve or military acumen. Thus, adversaries sometimes fight rather than negotiate because actors disagree about who will prevail if force is used.

 

The theory of war is predicated on the (uncontroversial) claim that conflict is costly. To the degree that war can be pursued with minimal impact on human life or livelihood, the assumption that war is costly binds less tightly. We might then expect to see important changes in behaviour as political actors adjust their use of violence to new conditions or look for new ways to inflict costly harm. Indeed, we have already witnessed some of these changes, as ‘drone war’ is conducted across international borders without official acknowledgement that war is underway. In short, costless conflict must be made costly to remain war, and war is unlikely to disappear until humans stop competing. To the degree that automation removes humans from the battlefield, violence will shift as well, increasing the temptation to target civilians. Trends that have led to a reduction in territorial warfare in modern times may also be reversed.

 

In the sections below, I first review the theory of war on which a logic of ‘robot war’ must presumably be fashioned. I also sketch a simple model of the political economy of national security: Capital has increasingly been substituted for human labour in the makeup of modern militaries. Nevertheless, the need for human cognition (asset specificity) in the line of fire remained a critical binding constraint. Combining these elements allows me to offer an initial theory of the politics of automated conflict. I conclude with a discussion of the implications of my previous speculations.

 

Politics by other means

 

War is a venerable and effective political tool, an authoritative method for the allocation of value, or what Hannah Arendt referred to as the ‘final arbiter’ in international affairs. Yet Arendt also argues that there is ‘no substitute’ for war. Since war only occurs occasionally, this assertion appears problematic on its face; substitutes must predominate for war to be episodic. The field of international relations has been so obsessed with accounting for the existence of war that it has failed to address its episodic nature. The causes of war must reside in whatever explains why the factors that typically substitute for violence in political affairs occasionally become inadequate as final arbiters.

 

War involves mutual costs and zero-sum stakes. Fighting is wasteful, giving rise to a widespread preference for voicing threats rather than performing violent deeds, even at some cost in terms of the initiative in combat. Only infrequently does fighting become necessary, when threats are not believed and when the stakes are deemed sufficiently important and timely to justify war. The need to conserve power creates negative peace, and some issues or ‘values’ are literally not worth fighting over. The high cost of war also creates a mutual interest in minimising violence as a final arbiter.

 

Classical conceptions

 

Behavioural conflict generally involves threats or acts of violence designed to conquer or compel. Conquest is the physical appropriation of property or territory, or the subjugation of people. In short, it is theft or predation. On land, physical space can be acquired without population by evicting or murdering existing inhabitants (genocide, ethnic cleansing). Property can also be appropriated or populations subordinated or enslaved without actually controlling space (raiding).

 

Coercion, in contrast, consists of persuading others to accommodate one’s preferred outcomes without directly controlling affected places, property or populations. Coercive threats or deeds involve a critical quid pro quo; I want something that I either cannot, or choose not to, take directly, but which you can supply, if properly motivated. Coercion is thus influence rather than appropriation. A mugging is coercion if the mugger says ‘give me all of your money!’ even when the mugger inflicts injury, provided the victim determines whether to comply. A mugging becomes conquest when the mugger knocks the victim unconscious and rummages through his or her pockets.

 

The purposes and impact of the two mechanisms differ with ends. Conquest works best in disputing tangible assets (people, places, things). Coercion is necessary for intangible assets (processes or policies). Taking ‘stuff’ does not require the consent of the current owners, though coercion may be less costly. If, however, the intent is to alter the behaviour of individuals, groups or societies, then the nominal consent of actors is intrinsic to the objective and coercion must be applied. One cannot conquer ideas but could coerce restraint (at least in public) from those who espouse them.

 

There are indications that modernity has shifted the focus of governments away from conquest and towards coercion. Researchers have also documented that territorial conquest tends to involve more intense dispute behaviour. Conquest is more often total war, at least locally, while coercion can be much more limited in scope. Competitors can agree or disagree to varying degrees about preferred policies, while property rights are necessarily mutually exclusive (rival), and thus more conflictual. Two actors who both desire the same policy are in a much different relationship than two actors who both want the same territory.

 

Bargaining and conflict

 

Bargaining theory evolved from the recognition that classical accounts neglected to distinguish between necessary and sufficient conditions for war. The fact that nations, groups and individuals can resolve differences through force does not imply that they must do so. Typically they do not. Indeed, the fact that war could ensue is itself a substantial motive behind the search for other ways to address disagreement. Most potential contests among adversaries never emerge because all opponents benefit from pursuing less costly or risky methods of arriving at settlements.

 

Wars typically end at some point, with a disposition or settlement that represents the new status quo. With settlements looming at the end of almost every contest, why not simply agree to the bargain that war will eventually produce, before a contest begins? Adequately explaining why the costly intermediate step of fighting is nonetheless needed then amounts to a logical explanation for war.

 

Fearon details three conditions that could prevent ex ante bargains from being forged. First, the issues at stake may not be divisible. Haggling over the stakes may be pointless if they physically cannot be divided up or if dividing them destroys value. The Old Testament offers the parable of two ‘mothers,’ both of whom claim the same infant. King Solomon proposes to cleave the baby in two, a compromise that is worth considerably less to either mother than a half interest in a whole baby. Fearon discounts this explanation because disputants can generally resolve indivisibilities through side payments. Opponents in the Spanish-American War, for example, settled control of the Philippines when President McKinley agreed to pay $20 million to the Spanish crown. If instead Spain and the United States could not agree on control, fighting might have to continue until one side or the other obtained all of the disputed islands. Military decisions are rare, though indivisibility could explain why some disputes appear intractable.

 

A second explanation for war involves the effect of anarchy in requiring self-enforcing bargains. A rising power may be tempted to agree to terms temporarily. As it grows powerful, the rising state can then insist on improved terms from its adversaries. Recognising these incentives, a declining state can prefer war today to defeat (or compromise) tomorrow. These ‘commitment problem’ wars are genuine tragedies because parties would prefer an enforceable bargain.

 

Commitment problem wars should end once fighting allows opponents to commit to a bargain or when one side has everything it wants and the other can no longer resist (military decision). The former is a relatively narrow set of circumstances in which the damage inflicted and incurred means neither party will grow in power relative to the other. More often, war creates new commitment problems. The latter is again unusual empirically; few wars end in military decisions precisely because combatants see where the contest is headed and forge deals that avoid further fighting.

 

This leads to the role of information in warfare. Combatants learn as they fight. One way in which war can set the stage for the end of conflict is by informing competitors about who is likely to win, and by how much, if fighting continues. As in the Gulf War, for example, peace obtains once disputants can agree on how fighting is likely to progress, should fighting continue.

 

Taking this argument back in time to the outbreak of war, adversaries generally disagree about how much victory one side is likely to achieve, and at what cost. This does not require misperception, mutual optimism or even irrationality, but uncertainty. If competitors do not know what war will yield, and cannot resolve uncertainty because bluffing offers individual-level advantages, then war can occur as opponents anticipate different war-terminating bargains.
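This bargaining logic can be summarised with a stylised bit of notation (mine, not the article's). Normalise the disputed stakes to 1, let p be the probability that side A prevails if war is fought, and let c_A and c_B be the two sides' costs of fighting. War is then worth p − c_A to A and (1 − p) − c_B to B, so any peaceful division x of the stakes with

\[ p - c_A \;\le\; x \;\le\; p + c_B \]

leaves both sides at least as well off as fighting. If uncertainty and incentives to bluff allow the two sides to hold divergent estimates p_A > p_B of the same probability, this range can be empty, and war becomes possible, whenever

\[ p_A - p_B > c_A + c_B . \]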

 

The political economy of national security

 

Organisations produce ‘security’ in a manner not unlike the way that factories manufacture goods. Different inputs to production (factors) are introduced to create the ability to deter, defend or to attack others. Making war consumes labour, capital and other productive factors. Technology determines how these factors mix optimally to produce more security at a lower overall cost. The use of different factors also involves tradeoffs and synergies. Tanks can replace soldiers, but each is also improved by the availability of the other. Declining marginal returns reflect the understanding that tanks without infantry or infantry without tanks are not as effective as a mix of both. Similarly, replacing soldiers with robots has economic, and possibly different military, opportunity costs.
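As an illustrative sketch only (the functional form is my assumption, not the article's), the production of security can be written as a standard two-factor function of capital K (tanks, aircraft, robots) and labour L (soldiers):

\[ S = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1, \]

which captures both features noted above: each factor exhibits diminishing marginal returns (tanks without infantry add less and less), and the factors are complements, since adding tanks raises the marginal product of infantry and vice versa. Technology, represented here by A and α, together with relative factor prices (cheap machines, expensive people), determines the least-cost mix, which is why advanced militaries tend to substitute capital for labour.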

 

Increasingly, automation is supplanting humans in civilian manufacturing, especially where capital is abundant and labour relatively scarce. Until recently, machines lacked the ability to perform complex tasks. Futurists have often got modernity wrong by paying more attention to the effects of technological change than to economic criteria of cost and efficiency. Predictions that automation would supplant human ‘grunt work’ were not entirely accurate. Robots remain costly. They still cannot do many things that humans do. Instead, robots have tended to replace humans in repetitive but precise tasks, while humans do many skilled activities. Paradoxically, people continue to dominate in unskilled jobs, where workers remain cheaper than expensive automation.

 

Much as civilian employers prefer to replace workers with robots for dangerous or difficult tasks, armies have long sought to substitute machines for people in the line of battle. This process has been only partially successful. While armour, artillery and airpower substitute capital for labour on the battlefield, limiting the number of humans directly involved in combat, automation in battle is bounded by the need for human intellect on the battlefield. Combat involves an enormous number of judgements and decisions. Despite incentives to the contrary, it just has not been technologically feasible to automate combat. Human intellect, judgement and supervision are still required in making war.

 

War in the absence of human beings: To boldly go

 

War without human combatants has never occurred, and so there are no examples to guide analysis. One way to assess unprecedented events is to look for contrasts that are less difficult to ‘unearth.’ In the first season of the original Star Trek television series, Captain Kirk and the crew of the starship Enterprise confront a pair of planets enmeshed in virtual war. Actual kinetic conflict has become so destructive and destabilising that planetary leaders have replaced physical force with artificial simulation. While synthetic war spares buildings and infrastructure, citizens still succumb, assigned to ‘disintegration booths’ by a computer model conducting hypothetical enemy attacks.

 

Since the Star Trek computer simulation is costly, it can take on the role of Arendt’s final arbiter, provided that the uncertainty precipitating conflict involves competitors’ resolve, rather than their capabilities. Any process of competitive risk-taking or harm-absorption can adjudicate among actors’ value for the stakes, as eventually one side can no longer tolerate the costs involved and chooses to concede. However, such a process does not provide information about relative capabilities, since the actions and reactions of simulated combat are artificially separated from the actual military potential of adversaries, as well as the expectations of relevant political authorities.

 

The opposite set of conditions confronts nations involved in automated combat. Fighting among robot armies presumably provides information about relative capabilities, but would only reveal information about resolve if the material costs of combat are high relative to the stakes. However, the fact that machines are being substituted for human labour in combat implies that costs, while not trivial, are lower than for labour-intensive warfare, making automated conflicts less informative.
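In the stylised notation introduced earlier, the point is simply that automation shrinks the cost terms. The disagreement condition

\[ p_A - p_B > c_A + c_B \]

is easier to satisfy as c_A + c_B falls towards zero, so disputes are fought more often; at the same time, fighting that costs little does less to pull the rival estimates p_A and p_B together, which is the sense in which automated conflict should be more frequent but less definitive.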

 

Returning to Star Trek, a core problem is the lack of agency. Nations that ‘disintegrate’ their populations are presumably strictly worse off than those that find ways to avoid, undermine or defy agreements mandating these actions. ‘Cheating’ death is in the national interest, especially when targets are productive or socially important. Normally, this is accomplished through deterring or defending against attack, or by protecting citizens from the effects of bombing or invasion, just as more precise or lethal targeting increases harm to an opponent. The deal between warring worlds in Star Trek is thus implausible, given incentives to cheat and the non-enforceability of the agreement.

 

In a war, military skill and capabilities are demonstrated through the development and implementation of strategy. Even an ‘accurate’ description of the capabilities of the two sides in a computer simulation is subject to ambiguity, error and especially manipulation. Any method capable of favourably altering outcomes or redistributing harm would constitute strategy. ‘War’ in the Star Trek episode would in all likelihood consist of cyber attacks or other actions designed to affect results of the simulation. It would also be difficult for opponents to agree on how to simulate war, since these details determine the distributional effects associated with Arendt’s final arbiter. Modelling assumptions affecting one side or another would themselves become subjects of dispute, since negotiations over parameters of the simulation would predestine the fate of populations on both planets. Real fighting would likely break out over how to simulate fighting, even as the simulation itself would become redundant if enemies agree on the likely consequences of either simulated or actual war. Successful negotiation of the terms of artificial conflict should logically lead to no conflict at all, since in agreeing to the conditions of the simulation both sides must construct a consensual understanding of how the contest will evolve and what costs and outcomes may ensue. Put another way, the treaties or tacit bargains that end most wars operate in practice much like the consensus software program required to continue to conduct a contest by computer proxy.

 

One need not venture to outer space in search of weapons that kill people, but not property. Biological and chemical weapons kill but do little damage to infrastructure. Samuel Cohen, the ‘father’ of the neutron bomb, was deeply affected by the devastation wrought by the Korean War. He strove to develop a device that would kill combatants without contaminating territory or damaging structures. ‘If we’re going to go on fighting these damned fool wars in the future, shelling and bombing cities to smithereens and wrecking the lives of their surviving inhabitants, might there be some kind of nuclear weapon that could avoid all this?’

 

Cohen described the neutron bomb as ‘the most moral weapon ever invented.’ This claim of course rests on the conviction that enhanced radiation weapons end wars quickly and with relatively little harm to populations. Apparently the United States government, the first to develop enhanced radiation (ER) weapons, did not agree. The neutron bomb was adopted with some reluctance, amid considerable controversy, and never in a form that achieved the effects anticipated by Cohen and other weapons designers. It was feared that the neutron bomb would trigger a wider nuclear exchange, and thus that merely destroying combatants was counterproductive if targeting populations was likely to follow.

 

The search for peace through better firepower is nothing new. Numerous technological innovations have been sponsored by humanitarians seeking to limit killing or even end warfare altogether, paradoxically by increasing lethality or firepower. The ‘Gatling gun’ was famously developed by a physician horrified by the carnage he had witnessed on Civil War battlefields. Richard Cobden voiced an opinion, widespread in the nineteenth century, that mass production made war prohibitively costly. ‘Should war break out between two great nations I have no doubt that the immense consumption of material and the rapid destruction of property would have the effect of very soon bringing the combatants to reason or exhausting their resources.’

 

Interestingly, Cobden’s comment makes no reference to human casualties. Innovations in warfare have consistently been designed to increase the lethality, accuracy, firepower (augmenting harm inflicted) or protection (decreasing exposure to harm incurred) of combatants. Still, all of these efforts occurred in a context where harm imposed and incurred remained connected. The elite members of the US and Soviet forces tasked with fighting World War III did not expect to survive their missions. These soldiers, sailors and airmen understood that their families, friends and much of the nation would likely perish in the conflicts that they would initiate and help to carry out. Recognition of the risk or even inevitability of loss in war is one of the major inducements to compromise and deterrents to conflict. The very prospect of harm that makes war appealing as a method of exercising political power also makes actors reluctant to fight. War is costly and risky.

 

In the Cold War, strategic deterrence initially involved counter-value targeting. It was the prospect of mass killing that was said to discourage a more general confrontation between East and West. Although discussions of counter-force targeting became common in later periods, along with doctrines that were said to allow for more flexible nuclear response, the best available evidence is that no great disjuncture in grand strategy actually occurred. Politicians continued to imagine and plan for a general nuclear exchange that would inevitably produce catastrophic loss of life, rather than just the destruction of military assets or capabilities.

 

Having sketched implications of pure killing in war, let us consider warfare that destroys stuff, not people. Science fiction, and our own imaginations, have spent much more time contemplating the consequences of increasing lethality. While robots are a common foil in science fiction drama, they are usually placed in opposition to human beings. Movies like The Terminator and The Matrix have good people battling bad machines. What if no people are directly involved in the fighting, or if the ‘good’ people are at home, sitting in front of computer screens, rather than in the field? Little serious thought has been given to the social-political consequences of battles among automatons.

 

One analogue stems from non-lethal weapons. Incapacitating an enemy could be as effective as killing combatants and destroying equipment. However, something must still be done with the enemy in order to attrit capabilities on a more-or-less permanent basis. Returning enemy soldiers to their own lines in the midst of a war would be counter-productive. Repatriation of weapons also aids an adversary. Non-lethal weapons should be most effective in rendering a military decision by reducing an opponent’s ability to fight rather than its will to resist.

 

It would also be wrong to imply that capital ever becomes unimportant to societies. A community that loses its industries, infrastructure and information systems will be deeply affected. Even if technological war only wrecks property, and not people, it will still prove costly. Yet humans have a special significance that even seems to increase in advanced societies. Labour in these societies is relatively scarce and thus valuable. Reducing human loss inevitably renders war less costly, especially among populations where machines are cheap and human lives are considered expensive. In the absence of a human toll for initiating conflict, civilisations will be less reluctant to go to war.

 

At the same time, wars of this type will be less informative to the extent that they are cheaper. A nation that does not need to imperil what it values most in fighting will have more difficulty demonstrating its values. Contests will tend to drag on because fighting is not especially informative. To the extent that machines dominate the battlefield, the decisiveness of contests will hinge on the willingness of actors to inflict and incur human harm, generally by attacking civilians. The targeting of military personnel will continue to be attractive. However, cover and concealment are enhanced by the ability of military personnel to operate remotely, far from combat areas.

 

If war is truly costless, then there are no material incentives to refrain from fighting. However, with no financial, social or human consequences, war ceases to function as an organic and binding method for the arbitration of disputes. Peace could ensue, but politics will still require some means to conquer or compel. The recourse to violence reflects dissatisfaction with either the method or outcome of other means for determining winners and losers. Costless war must therefore trigger a search for other methods of imposing costs, even as opponents seek security and new ways to harm in return. It is again this relationship between hurting and being hurt that characterises conflict.

 

It does not follow that these effects are equally felt by both sides in a dispute, or even that individual actors feel compelled to respond to the symbiotic nature of conflict. If war imposes little or no cost on some practitioners but imparts harm to others, then it will be practiced freely, and frequently, by those that are asymmetrically free of its burdens. Today, as in the past, powerful countries fight less powerful nations and non-state actors more often in part because they can act with relatively little harm to themselves. Yet, relatively little information is revealed by the capable party engaging in such an exercise, since fighting is cheaper or easier for the powerful actor. The low-cost combatant has difficulty demonstrating resolve, while the high-cost combatant demonstrates resolve, but lacks the capability to conquer and must therefore rely on coercion. A decisive outcome favours the more capable power, so the weaker actor avoids decisive combat and both sides end up pursuing attrition strategies. The indecisive nature of asymmetric conflict also increases the ability of both actors to demonstrate resolve through enduring in a low-intensity combat environment.

 

The shift in asymmetric warfare to low-intensity contests of longer duration reflects the strategic nature of conflict and the role of agency in adjusting to one another’s advantages. Practice in war must reflect mutual best strategies, or different strategies will be adopted. Alternately, actors will cut deals to avoid or end contests in which they clearly expect to be at a significant disadvantage. Weaker adversaries that confront opponents in decisive battle will tend to do more poorly than those that adopt asymmetric strategies. Capable countries that fail to deploy their superior capabilities will less often prevail. The comparative advantage of the weak is to go for a longer contest in which a stronger enemy may reveal itself to be less resolved. But this also means that stronger opponents will be able to demonstrate resolve, where such resolve exists. The comparative advantage of the capable is to seek large-scale combat, even by exposing friendly forces to tactical disadvantage, much as both France and the United States did, respectively, at Dien Bien Phu and Khe Sanh.

 

Circumstances generally encourage a countervailing asymmetry between resolve and intentions. Capable but less resolved actors are better off expediting, and escalating, contests, while more resolved but less capable actors demonstrate resolve through patience. Similarly, states with limited aims behave differently from those with more ambitious goals. Asymmetries should force adversaries to consider responding differently, leading to more combinations of best strategies, depending on circumstances. Capable states with limited aims facing less capable opponents may prefer recurrent conflicts (rivalries) to total war. Israel, for example, could have conquered its warring neighbours, but this would have required it to administer large Arab populations. So, Israel wins wars but its limited aims ensure that its enemies survive to fight another day.

 

Aggressors adopt actions that play to their advantages, just as targets prefer to seek their own best ways of reacting to, and imposing harm on, an adversary. Russia leveraged strategic depth to make Nazi aggression unsustainable in World War II. Estonia, conversely, cannot defend itself against occupation by one of its larger neighbours but has used insurgency to make occupation costly. Where these advantages differ, so will the means of force or coercion. At the same time, asymmetries in the intensity of motives should produce different adherence to this general tendency. The fact that I accidentally step on an occasional ant tells you very little about my love of gardening but a great deal about why ants live underground. Enemies will look for, or more precisely gravitate towards, symbiotic relationships between the cost and effectiveness of contests even if this is not where fighting begins. In short, costless war for one or both sides will tend not to remain costless.

 

The consequences of military automation

 

The brave new world of war using remotely operated machines is complex and in its earliest stages. I discuss implications of future war through two illustrative scenarios designed to clarify relevant issues and identify important dimensions along which conflict is likely to change. They also highlight attributes of future war that, while a source of concern, are less likely to transform political behaviour.

 

One-sided techno-war

 

Imagine that one military force relies on robots to fight, while its opponent uses human combatants. This scenario will no doubt occur first as the number of actors able to field capable remotely piloted combat systems will initially be small, given the process of technological diffusion. Current US operations in Pakistan, Yemen and elsewhere are early examples of one-sided automated combat.

 

Relative success in military terms will still depend on how each side deploys asymmetric strategies, how well high-tech systems hunt human insurgents and how effectively warm-blooded combatants exploit weaknesses in the technological force, whether these weaknesses result from technology or are the product of more traditional vulnerabilities. The manner in which automation influences how opponents fight is an interesting topic in its own right. However, this process is not necessarily critical in strategic terms, independent of other factors. With enough political will, a less sophisticated military force can overcome substantial material and technological disadvantage. Conversely, technology can make up for limited resolve, but technology is not a perfect substitute for the will to fight, if human combatants remain in peril.

 

One of the potentially revolutionary features of automated warfare is the degree to which technology may free one or both sides from the need to mobilise political support before deploying military capabilities. As recent events should make clear, technological powers are increasingly able to initiate, or to broaden the scope or intensity of, conflicts, often with relatively limited internal debate. Uses of force will thus increase as automated systems reduce the risk of friendly casualties.

 

At the same time that automating combat reduces the costs faced by the technological power, it also reduces the ability of the country substituting robots for people to demonstrate resolve. Pilotless vehicles tell us very little about an actor’s willingness to face high costs or risks. Deploying automation rather than flesh-and-blood soldiers may even imply the opposite, since the side fighting remotely may or may not care enough to do anything but risk machines. As outlined above, capabilities strongly favouring one side tend to coincide with resolve favouring the less capable actor. The weak must compensate for a lack of capabilities with a greater willingness to risk or sacrifice wealth, territory or personnel. Indeed, opponents tend to concede disputes when one side is clearly favoured by both capabilities and resolve. The asymmetrically capable actor is typically both less vulnerable to casualties and more sensitive to the human toll of war. Asymmetric techno-war thus accentuates this basic characteristic; the technological power prefers not to risk casualties, while its opponent is limited in its ability to convert costs and casualties incurred into harm imposed.

 

Asymmetric warfare would seem to offer enormous advantages to the more capable actor. It does. Precisely because these advantages are often obvious, however, many potential asymmetric contests never occur. Asymmetries of capabilities are typically matched with asymmetric stakes. Insurgents fight for their homeland while the capable adversary is projecting power far from home. To the degree that the capable power is willing and able to conquer, these contests are again brief. The invasions of Iraq and Afghanistan were accomplished quickly with little effective resistance. Here again, however, there is a tension between capabilities and motives. The wealth and economic power that make technological militaries possible also diminish the value of tangible assets (such as territory), while increasing the appeal of control over intangibles (such as policy or regime issues). The shift away from territorial conquest is a result of historic changes in the cost and value of occupation that make it preferable to buy foreign inputs to production. If Ancient Rome had had modern weapons, it might have conquered the entire Eurasian land mass. But to have modern weapons, Rome would have also needed the advanced industrial capabilities that make conquest far less appealing than commerce to modern economies.

 

If asymmetric warfare involves a test of wills, rather than a test of capabilities, then the outcome of the contest hinges on perceptions of the resolve of the less resolved actor. This is being strenuously tested if indeed the technological power prefers to deploy robots rather than humans on the battlefield. Asymmetric wars tend to drag on because time is a proxy for resolve and because at least one actor prefers attrition to decisive engagements; one side cares but is limited in what it can do, while the other side is less resolved, but is better able to inflict considerable harm.

 

Deployment of increasingly sophisticated automated systems appears destined to dramatically weaken the ability of non-technological actors to prevail in direct combat with technological powers. This would seem at first to be a linear extension of previous effects of technology on the battlefield (fire, manoeuvre, command-and-control, ISR, etc.). However, it is not the increased effectiveness of these systems over human counterparts that is their most salient feature (they need not be more effective, nor is it clear how such a comparison would be made). It is the ability of remotely piloted systems to limit exposure of friendly personnel to harm that is their greatest asset and most distinctive quality. Human combatants from the technological power already tend to be rare and so are harder to locate and interdict. The scarcity of human adversaries reflects supply and demand; technological actors suffer more from human losses and thus try to limit them, even as relatively low valuations for the stakes in a contest make them more sensitive in general to war costs. As automation makes it possible to relocate human cognition away from the battlefield, valuable human targets will become scarcer still, breaking the bond between harming and the risk of being harmed.

 

Less technological actors must therefore succumb, or seek out new ways to prevail, often by choosing some other setting in which to take their fight to the enemy. One-sided robot wars are therefore destined to shift targeting away from the battlefield and towards the leadership, logistics, allies and ultimately the populations of the technological power. Put simply, techno-war should lead to more attacks against non-combatants through bombing, terrorism and related techniques.

 

The shift to attacking civilians results from circumstances and the logic underlying warfare. Historically, the technology of war did not allow for even the most sophisticated countries completely to remove their citizens from the battlefield. Whatever the marginal cost of human labour, the production of security required exposing some citizens to combat. The ‘tooth to tail’ ratio has steadily decreased, even as military labour has become specialised and use of ‘elite’ forces has allowed politicians to minimise national exposure to casualty risks. Still, some personnel were needed on the battlefield, particularly in the kinds of low-intensity combat championed by insurgents and weaker powers. This had the effect of deterring some forms of power projection by capable states, and of encouraging less technological actors to consider resisting stronger opponents on the battlefield.

 

If technology finally removes those responsible for imposing harm from direct combat, however, opponents must look elsewhere to re-impose the relationship between harm inflicted and incurred. Since the role of combatants and non-combatants is defined in ethical terms, use of force against civilians is viewed by many as immoral. Still, this is a practical concern for less technological combatants; with no human enemy to confront in battle, the less technological actor must choose between attacking non-combatants and defeat. Conversely, to the extent that actors fail to overcome inhibitions against targeting civilians, they will be forced to concede to technological powers. Since disputes occur in places and over issues where contention exists (the parties must generally disagree about who will win and by how much in order for a dispute to occur), consensus about the outcome of a contest should lead adversaries to forge bargains, not to fight. In other words, rather than primarily affecting the frequency of conflict, automation should relocate contests; peace will break out in some places, while force will be used in areas previously considered marginal.

 

A technological power that utilises remote systems is saying two things about its preferences. First, it is emphasising its sensitivity to casualties in its willingness to substitute machines for people on the battlefield. Second, one cannot rule out the possibility that the technological state’s resolve is very thin, since it may have chosen to fight at least in part because automation lowered the expected costs of fighting. This also means, however, that the technological power could well be persuaded to quit the conflict after experiencing relatively modest harm. ‘Kill’ the machines and they will be replaced. Kill citizens or the citizens of allies of the technological power and the opponent might relent, much as the Somali warlord Mohamed Farrah Aidid convinced President Clinton to withdraw from Mogadishu by killing 18 American soldiers. With few human combatants, wars against technological powers are paradoxically won by inflicting human carnage.

 

The shift in the target list for less technological actors is already underway. The tendency to remove humans from the battlefield will accelerate as technology makes substitution of capital for most or even all battlefield labour feasible. The asymmetry of interest that accompanies power asymmetries helps to obscure this dynamic. Because machines are not yet capable of replacing human beings in many roles, some human combatants persist on the battlefield. These combatants are especially appealing targets, which both limits the willingness of powerful nations to fight in many instances and also encourages the less capable actor to resist by targeting remaining enemy combatants. In the extreme, with no human adversaries available in the battle space, the less technological opponent will have no reason to expose him- or herself to harm. Almost as soon as automated combat becomes a reality, the enemy will find other ways to fight, leaving the battlefield altogether in favour of unconventional methods of conflict. Just as insurgent forces have learned to avoid the fires of combined arms warfare, hiding in the terrain or among civilian populations or alternately closing with the enemy to avoid the worst effects of air power and artillery, so too the strategy of the weak in the age of automated warfare will involve immersing combatants in urban populations, to blunt enemy advantages and to increase the ability of insurgents to inflict harm.

 

The advent of one-sided automated conflict will extend the duration of asymmetric contests. Just as factors that improve the prospects of victory for one side encourage opponents to look to other domains of conflict, so too the factors that improve decisiveness for one side lead opponents to find ways to increase the duration of a dispute. Much as with moving war outside the traditional battle space, one-sided techno-war promises to extend contests temporally, retarding dispute termination. Precisely because directly confronting an automated force will prove costly and futile, a less technological enemy will go to great lengths to resist decisive engagements.

 

Politics and strategy can render fancy weaponry redundant. Lop-sided victory in the First Gulf War led to talk of a ‘Revolution in Military Affairs’ (RMA). However, recognition of US dominance produced either acquiescence or efforts to counter US advantages, initially through asymmetric strategies (insurgency, nuclear deterrence) and increasingly through asymmetric capabilities (stand-off air and sea missiles, tactical nuclear weapons). Precisely because everyone understood that the United States was going to win a conventional military contest, there was very little observable evidence of US acumen in this arena. Less resolved actors simply conceded issues they thought might trigger a confrontation with the United States, while more determined adversaries chose to fight in ways that prevented US forces from exercising their avowed advantage.

 

Automated systems will have similar effects, even if they fail to prove as unrelentingly effective in asymmetric warfare as they appear destined to be today. The ability to defeat an opponent in open combat with very little cost or risk to a nation will mean that less resolved adversaries will be defeated, concede or reconcile themselves with available compromises. Victory against automated systems will be rare precisely because their low cost in human terms makes defeating them fatuous. When, as may be the case, the technological power is marginally committed to victory, the destruction of equipment can prove sufficient to render defeat, and so ‘killing machines’ will sometimes prove effective. However, the nature and value of machines designed for combat conditions suggest that, for many issues, material destruction will not suffice to dissuade or deter. Submission, avoidance or terrorism will then be appealing asymmetric responses to military automation.

 

Two-sided techno-war

 

The image that many may have of symmetric future technological warfare is perhaps an aggrandised version of the robot wars one can watch on YouTube or television re-runs. However, the political objectives of conflict remain; actors threaten or use force to obtain preferred states of the world. In the absence of a military decision, it is the losing side that must decide whether a contest has ended. Combat largely among remotely piloted systems could potentially result in a negotiated settlement, but only for very limited ends. Techno-war can serve as the ultimate arbiter only if both sides accept trial by robot combat. Much like the warring planets in Star Trek, machine-based contestation hinges on embracing this artificial and limited form of arbitration as final. Alternately, disputants may be tempted to pursue conflict through more traditional, sanguinary methods. If the ‘loser’ cannot reconcile its status, then war will again require a blood sacrifice. One or both sides will eventually move away from attacking machines as targets to achieve victory, or to avoid defeat.

 

Techno-war creates conditions that could separate out the human and material costs of conflict. War planners and politicians are capable of asserting moral standards such as the obligation not to kill civilians and to treat combatants humanely while still applying force in such a way that the intended effects of ethical objectives are frequently compromised. Techno-war makes it more difficult to blur these lines because it separates humans more thoroughly into combat and non-combat roles. A battlefield dominated by machines may produce some accidental human casualties, but combatants can also fight in places where humans are sparse. This may be much less fanciful than it sounds. Given that part of the reasoning behind military automation is to reduce the exposure of humans to harm, symmetric techno-war implies that participants on both sides share these preferences.

 

Unfortunately, if this logic was generally true, then conflict would not occur in the first place. War is a competitive struggle with a strong zero-sum component. Factors that benefit one actor or the other disproportionately are naturally counterproductive for an adversary. Fighting will be limited to machines only if no disputants are willing to target anything but automated enemies. If one country only attacks enemy robots, then one way that an opponent might improve its prospects for victory would be to attack something other than robots. Indeed, the minimax logic of conflict implies that, to the degree that one side prevails in combat using a given technology, strategy or target set, its opponent must be comparatively better off adopting contrasting weapons or targets.

 

Suppose that two nations deploy armies of RPVs. Imagine further that one side’s machines eventually defeat/destroy the opposing RPV army. What then? The ‘losing’ side is of course free to accept defeat. However, it need not, and may not wish to. In limited war, the loser alone can hand victory to its adversary. The state that has fared poorly in automated combat must be aware that its adversary may be relatively more sensitive to human casualties. Conversely, even if one side no longer possesses a robot army, it can refuse to submit. Just as with one-sided robot conflict, there remains the prospect of human casualties. For its part, the successful actor in the purely RPV contest must consider whether it must offer its adversary generous terms to achieve peace, or widen its target list to compel the opponent to accede to a more thorough defeat.

 

If the prospective loser does not care too much about the stakes of a contest, then it might agree to make modest concessions without further coercion by its technologically superior adversary. Any costly act can reveal resolve; as long as the robotic combatants have value to their owners, damage or destruction is informative. However, destruction of assets specifically deployed as substitutes for humans on the battlefield implies a bounded loss. Replacing humans with machines is intended in part to make war less costly. The value of military machines may also be more predictable. To the degree that losses can be anticipated, they should prompt different bargains rather than leading to war, as both parties are better off with these compromises. The very fact that a contest occurs implies disagreement over the value of some aspect of the dispute. If adversaries are mutually able to price each other’s war materiel, ambiguity over the value of the stakes in a contest, or the likelihood or value of human casualties, must motivate the contest. To the degree that fighting with robots reveals only limited information about these values, other modes of warfare may ensue.

 

If the putative loser of techno-war refuses to accept defeat, then there is relatively little about the techno-contest itself that can force a resolution of the dispute. Defiance is a quality that is most meaningfully exhibited by the weak. As long as an enemy is willing to live with the consequences of persisting in a lop-sided contest, then there is nothing about what has happened on the battlefield that fundamentally alters political realities. Weeks of one-sided bombing in the First Gulf War by coalition air forces left Saddam Hussein with numerous wrecked palaces, but it did not compel him to withdraw from Kuwait. As long as coercion was the dominant strategy practiced by the US-led coalition, it was Saddam alone who determined the status of Iraq’s nineteenth province.

 

As the bombing campaign against Iraq illustrates, substitution of capital for labour has already produced an unintended transformation in warfare. Increased use of military capital allows technological nations to project power farther from home, increasing the range and number of issues and places in which the technological power can become involved. Contrary to the expectations of twentieth-century liberals and futurists, however, technology is gradually making war longer and less decisive, not shorter. While the technology of war itself greatly increases the destructive capability of sophisticated militaries, redeployment of personnel away from the greatest destruction minimises casualties, increasing engagements and reducing the informational value of fighting.

 

Modern clashes like the two Persian Gulf Wars, the Arab-Israeli Wars and disputes between India and Pakistan appear to demonstrate that the infusion of capital has necessarily increased the operational tempo and shortened the duration of war. Yet, a common feature of these contests is the contrast between military realities and political will. Wars of conquest in which the winner has limited aims can indeed be short. The loser is incapable of continuing to project power, while the winner has no wish to expand its influence or control. Conventional battle may have ended quickly as military stockpiles dwindled or one side rapidly demonstrated technological or tactical dominance, but these contests continue to simmer because disagreements about subjective value or relative resolve do not hinge on the availability of military hardware. The losers take a very long time to accept the realities imposed by technology precisely because the cost in blood does not match their value for the stakes. The losers bide their time while they rearm and adjust their tactics to extract proportionately more blood from their more technologically advanced adversaries.

 

Consider in contrast the Iran-Iraq War or the conflicts of the former Yugoslavia. Adversaries with less finite aims and fewer inhibitions against human slaughter quickly turned their war machines on human adversaries when mechanical targets were unavailable, even as combatants and populations resisted where possible despite lacking access to high-tech weaponry. In the 1950s, China fought the most sophisticated technological power of the age to a standstill with human wave tactics. United Nations forces resurrected massed artillery fires reminiscent of the First World War in an attempt to blunt the onslaught of poorly armed Chinese ‘volunteers.’ Limited war involving disproportionate technological destruction can be imposed, provided that the winning side is technologically dominant and has limited aims. However, the failure to decide disputes in blood means that the political basis for a dispute will continue to fester. In contrast, contests with technological winners where the loss of life is substantial will tend to be decisive and dispute-ending.

 

Excessive confidence in the ability of technology to determine political outcomes as opposed to military decisions is reflected in recurrent fallacies about air power. Victory from the air requires the active assistance of the loser. Suppose again that automated conflict occurs as described above and that combat ends after one side prevails on the battlefield. The nominal loser can simply refuse to comply with demands from its more successful adversary, just as Saddam Hussein refused to submit to coalition demands in response to the air war. At the same time, the losing side can begin the process of recovering from defeat by rearming, creating more (possibly more effective) robots.

 

Actors have the option of responding to insecurity with internal or external balancing. Disputants can build more weapons or seek out allies before, during, or after combat. The only way to ensure that an adversary will not increase in relative power is to destroy some or all of the adversary’s productive capacity and to scare off or entice away existing or prospective allies. Failure to consider how military automation will affect the balance of power would of course be a mistake. Defeating an adversary’s robots and then allowing the adversary to re-arm would achieve little of political value, since the costs involved will often be small relative to human casualties and because material losses would be temporary. Thus, unless a contest produces additional harm, causes the loser to fall behind militarily or prevents the loser from re-arming, victory is likely to prove fleeting.

 

How then is the victor in techno-war to prevent an enemy from building more and possibly better robots in the future? Destruction of an adversary’s military/industrial capacity has always been a critical objective in warfare. Historically, societies made war to undermine an opponent’s capabilities, and advantage themselves, by capturing territory and populations. The automation of war will witness the fruition of a long-term trend in which the basis for a country’s military power is concentrated in its factories, rather than in the homestead, the farm, nurseries and schools. As such, territory will be increasingly marginalised in acquiring and maintaining military capabilities, or in altering the balance of power. Rather than taking possession of productive land or populations, the ‘winner’ of robotic war will be tempted to capture or destroy an enemy’s industrial capacity.

 

Nations have frequently resorted to decimation or appropriation of productive factors in order to win advantage or compensate the homeland. There is a difference in emphasis, however, if productive factors for war fail to include or make limited use of human labour. If military technology renders human combatants marginal, then victors need not carry out the kinds of depopulation efforts that sometimes characterise victor’s justice. Ethnic cleansing, forced migration, mass rape and other despicable activities can be traced in part to efforts to undermine the fighting spirit and martial capacity of a temporarily vulnerable but enduring foe.

 

Bombing or otherwise destroying or appropriating enemy industrial facilities still does not directly involve targeting human beings. It may be possible, for example, if robot factories are themselves staffed with robots, to avoid again having to kill or injure human adversaries. Yet, this simply augments the basic logic of costless war. Conflict that shifts the balance of power is certainly important, but only within the context of the acquiescence of the nominal losing side. An adversary shorn of even its future robots can still say ‘no’ to compromise. Getting to ‘yes’ against a determined adversary will again involve imposing additional war costs in human terms.

 

Military automation changes the distribution of war costs, making it easier to use force at much lower risk of battlefield casualties. However, automation does not imply any improvement in the ability to resolve more intense disagreements, precisely because bigger issues require a willingness to impose higher costs on an enemy in order to prevail. Because the ‘losing’ side in a robot contest will only concede when the issues in dispute are not particularly critical, even nations wielding highly effective remotely piloted systems will have to consider targeting humans, precisely because humans are valuable, especially to societies pursuing military automation. Again, with no humans on the battlefield, looking elsewhere for targets becomes a practical necessity.

 

War must involve the prospect or practice of human casualties to be highly costly and coercive, especially under military automation, where military labour is scarce and protected. Historically, human combatants have been either numerous or valuable (or both). If techno-war is distinctive in removing human combatants, then its crucial feature is that human targets will not be available on the battlefield. In political terms, two-sided techno-war will tend to reward targeting enemy civilians away from the battlefield (or a broadening of what counts as the ‘battlefield’).

 

Sadly, this is neither new nor speculative. The principal war-fighting strategy of the Allied powers in World War II was effectively to kill civilians in extremely large numbers. Both Britain and the United States entered World War II as technological powers, intent on bringing war to the enemy and avoiding the heavy casualties of trench warfare by exploiting a comparative advantage in capital-intensive, multi-engine bombers. First the British RAF and then the US Army Air Force learned that precision bombing was impractical under wartime conditions. Aircrews were hardly ever able to bomb military targets accurately. Much like the massed fires of World War I (and Korea), the Allies conducted massed bombing raids in which it was hoped that at least a few of the hundreds or thousands of bombs dropped would hit military installations. In practice, however, massed bombing of built-up areas meant targeting civilians. This led to a rapid evolution of applied ethics, as the Allies decided that bombing civilians was justified by Axis atrocities, enemy recalcitrance, and the need for victory. The irony, of course, is that a technological solution intended to limit casualties ensured that many more civilians were killed or injured. It was precisely the inability to coerce an opponent from a distance by destroying equipment that led principled governments to adopt less and less discriminate approaches to targeting.

 

The purposive mass killing of enemy civilians in World War II is a highly distasteful aspect of the Allied war effort. It is important to emphasise, however, that it was conducted by leaders and militaries that had consciously prepared for a very different kind of war, one in which technology would allow the Allies to minimise all types of casualties. Leaders fielding automated armies in the future will face similar ethical and practical challenges. Those who imagine using precision to avoid killing civilians may well find that the ability to hit what one is aiming at does not necessarily make war any less sanguinary in the end. Attempts at ‘dialing in’ harm have backfired in the past: McNamara’s incremental bombing strategy in Vietnam, for example, failed to compel and allowed the adversary to adjust to the threat. Precision makes war less costly to all but the intended target, but it does not mean that war can be costless. For the reasons outlined above, killing will remain an important component of war, especially when an enemy stubbornly resists defeat.

 

Who gets killed in technological war (and why)

 

Ethical precepts require that civilians not be intentionally targeted in war. Combatants and war planners are supposed to take the vulnerability of non-combatants into consideration in deciding how and whether to use force. These precepts have even led, on occasion, to leaders or combatants holding their own populations hostage in an attempt to deter enemy aggression.

 

The standard of not intentionally harming civilians gained prominence at about the same time that advanced societies experienced a dramatic increase in the ability to inflict unintentional harm. Though these standards have been around for a considerable period, their impact on targeting has changed with evolving circumstances. Industrialisation in the nineteenth and early twentieth centuries introduced technologies that rapidly increased the lethality and range of fires. Before this, military violence was generally inflicted through the literal force of ‘arms,’ at or near arm’s length. In the sixteenth century, with the development of artillery and siege warfare, civilian bystanders faced a growing hazard of injury or death. Unintended harm to civilians increased exponentially with the development of indirect fires and more lethal explosive artillery. Bombing from the air in World War II brought this dynamic to fruition. Commanders could not claim that they were not killing civilians, but they could just about plausibly argue that they were not doing so intentionally.

 

As the ability to sow destruction extended beyond visual range, dramatic increases in civilian casualties became not only possible but threatened the enduring principle of protecting civilians from harm. The ability to destroy vastly outstripped the ability to destroy only intended targets. Efforts to kill combatants endangered civilian populations. Non-combatants became ‘collateral damage.’ Conveniently, human rights standards evolved that required combatants not to target civilians, but did not preclude killing them in the process of attempting to kill enemy combatants.

 

At other times, the purpose of war was to kill civilians, undermining the enemy by weakening its means of production and shifting the balance of power in one’s favour. Medieval warfare was often an exercise in pillaging, spoiling and decimating enemy populations, rather than battling enemy battalions. It was not always possible or convenient to target enemy military formations, which could fight back or were inaccessible behind walls and ditches. Certain forces also had a comparative advantage against civilians, such as irregular light cavalry or Allied bombers in World War II. Moral qualms about targeting civilians have not followed a consistent or linear path, unless one considers the interests and capabilities of combatants. Nations even deploy ethical standards in a propagandistic or constraining manner to gain advantages over their opponents in both peacetime and war.

 

More recently, the introduction of precision targeting has dramatically reduced civilian casualties, but civilians still die through accidents or as part of intentional strikes against military targets. Coalition forces operating in Iraq and Afghanistan have killed far more civilians by accident than Al Qaeda has with intent. The asymmetry of capabilities means that even efforts to avoid harming civilians can cause far more bloodshed than less technological attempts to target them deliberately. Not intentionally targeting civilians also makes sense in military terms when civilian populations are of relatively little military value or when an opponent does not much value the lives of its citizens.

 

The automation of war makes it possible in principle to avoid killing altogether. Shooting down remotely piloted vehicles or machine gunning a combat robot murders circuits rather than flesh and bone. To the extent that machines are substitutes for biological combatants and to the degree that fighting occurs in unpopulated places, future war could dramatically reduce human loss of life.

 

The question remains, however, whether the mere destruction of technology will prove adequate to accomplish the objectives typically associated with military violence. It has not in the past: Japanese military power in World War II was steadily attrited, even as that nation’s industrial might was reduced to rubble. Despite this, Japan’s leaders continued to prepare for an ever more determined defence. It was only the intentional annihilation of unprecedented numbers of civilians in two atomic bomb attacks that led to a radical reversal of these plans.

 

I have argued that automation (combined with precision) will lead to a resurgence in targeting civilians, precisely because few human combatants will remain on the battlefield. Civilians will be targeted because they are available and because enemies typically wish to win. World War I is rightly remembered as a land contest. The great naval fleets that had been the focus of so much pre-war manoeuvring and expense spent the war in port, too valuable to be risked in actual combat. More to the point, use of these forces endangered too much else. Neither Britain nor Germany was willing to hazard the consequences of a decisive naval engagement. It is possible that automation will create similar forces that are too critical to the survival of the state to risk in open combat. Ironically, this might protect civilian populations at the cost of forcing a return to human combatants, or the situation might deter war altogether.

 

Still, this scenario must be weighed against other evidence. It was the British blockade that eventually starved Germany into submission, even as combat in the trenches continued to draw blood on both sides. Had the adversaries acknowledged this, they might have saved innumerable lives by settling their differences at sea. Yet the outcome of a naval contest appeared all too predictable. The greatest human tragedy of the Great War occurred where the outcome of the contest was most in doubt, on the Western Front, while the greatest concentration of military capital lay dormant. Military automation advantages societies by lowering costs and increasing influence, but it does not necessarily address the origins of war, which lie in the imperative of imposing and enduring costs to reveal relative capability or resolve.

 

Conclusion

 

The automation of war is just a small part of a much broader process involving the replacement of human labour in all sorts of productive processes. Conflict is of course different from production, as politics differs from economics. However, all production involves inputs of factors that are needed in different combinations and qualities at different points in history. Changing the mix of factor inputs to war means changing the nature of warfare and potentially altering who wins, and how.

 

The purely military consequences of military automation have been considered for some time. Technology and war have been topics of interest for decades, if not generations. On the other hand, relatively little attention has been devoted to understanding how technological change is likely to alter political motives and practices in deciding to use force. History suggests, in fact, that technology does as much (or more) to alter the political calculus of force as to change how generals deploy armies. Examples abound. Vauban was both a town planner and an expert on siege warfare. Napoleon is known as a great general but is also remembered for political reforms that allowed him to field a new type of army. New technology in the nineteenth century changed warfare, and politics, long before either politicians or military leaders acknowledged these transformations.

 

I have argued that the automation of war will have counterintuitive effects. Rather than making war costless, automation will make it advantageous for commanders to target civilians, much as investments in long-range ‘precision’ bombing led to mass civilian casualties in World War II. Conversely, robot armies also make it easier for leaders to contemplate much more frequent uses of force at lower intensity levels. Modest uses of remotely piloted vehicles, designed to erode power relations rather than suddenly alter them (‘grey zone’ conflicts), may become a more-or-less continuous practice among rivals, just as cruise missiles, drones and cyberwar have stretched the definitional boundaries of war. The risk is that minor skirmishes will escalate. In a process mirroring Cold War brinkmanship, but at much lower intensity levels, leaders may play chicken with the prospect of broader conflict, with one side usually, but not always, backing down.

 

Automation will also change where states and other actors fight. A low risk of casualties is already encouraging the use of RPVs in places that would otherwise have proven prohibitively costly or insufficiently central to the interests of Western powers. One of the things that appears to have ended European colonialism, and limited interest in conquest in the latter half of the twentieth century, was the high labour cost involved in military occupation. Suppressing populations is labour intensive. Heavy bombers and submarines are pretty poor at crowd control. It became efficient to ‘outsource’ predation, relying on local dictators and demagogues to govern.

 

Military automation may thus revitalise occupation, particularly if there are other strategic or economic benefits to direct physical control of territory, such as plundering resources or denying assets to competitors in a complex, multipolar world. Cheap automated ground systems could patrol and conduct the other routine activities of occupation at much lower cost than relying on human occupiers. Already, Israel has deployed remotely controlled combat systems to patrol its borders. Automated combat systems may even allow states to ‘dial in’ occupation from afar.

 

Finally, automation will change the balance between populations and power. Nations with better factories have long had an advantage in modern warfare. If force can be generated by machines making machines, then the relationship between labour and victory will be largely severed. While highly speculative, this could create an impetus in tension with the modern shift towards popular rule. Democracy is at least partly the product of the growing difficulty of coercing mass populations in contrast to the low cost of governing by consent. Practical liberal government also reflects a certain congruence of interests between masses and elites. The holders of the means of production still need labour to assist them. Paying workers and giving them the vote are two sides of the same coin. Military automation at once lowers the cost of coercion, increases the distance between the interests of elites and ordinary citizens, and reduces the utility of cultivating loyalty among the population. If it is cheaper to compel the masses in a foreign land, then it is also cheaper to use machines to repress at home. The recent rise, or perhaps recovery, of populism in the West implies that tensions between elites and masses are growing as machines drive a wedge between ever more skilled and unskilled productive labour. What has been missing is a way for elites to impose their will against popular opposition. Military automation may well provide this means.

 