IHL Challenges Series - Part III on New Technologies

In our third post on IHL and New Technologies, Michael N. Schmitt examines the relevance and adaptability of the rules of war when it comes to modern weapons. He says the law is alive and well, and cautions against taking a "sky is falling" approach towards the development of new battlefield technologies. This post is part of our regular IHL Challenges Series. Mr. Schmitt is Chairman of the International Law Department at the US Naval War College and a Senior Fellow with the NATO Cooperative Cyber Defence Centre of Excellence.  The views expressed in this piece are those of the author in his personal capacity.

It is becoming stylish to bemoan the “insufficiency” of international humanitarian law (IHL) with respect to the new technologies of warfare.  Indeed, at several recent conferences, I have been left with the sense that anxiety about IHL’s suitability for new weapons is in inverse relationship to one’s understanding of that body of law.  But ignorance is not bliss when humanitarian law is at issue.  IHL matters in very real ways in the modern battlespace – to both the civilian population and combatants.  Proclaiming the inadequacy of IHL without understanding its inherent adaptability is simply irresponsible.

In fact, IHL demonstrated its impressive adaptability to new technologies in the 20th century, and so it will again in the 21st.  For instance, the development of tanks, machine guns and airplanes initially generated great concern, but over time IHL proved capable of regulating these means of warfare even though no specific norms emerged to govern their use.  Similarly, uneasiness with over-the-horizon weapons and beyond visual range (BVR) engagements surfaced in the latter decades of the century.  Again, consistent application of the basic principles and rules of IHL (such as distinction and the requirement to take precautions in attack) revealed the inherent vitality of IHL in meeting the demands of novel technology.  And in this century, the “International Group of Experts” considering the applicability of IHL to cyber operations as part of the Tallinn Manual project concluded, after three years of rigorous examination, that the legal regime was fully applicable and generally adaptable to this form of warfare. (1)  Yet, the ongoing debates over remotely piloted aircraft (RPA, or so-called “drones”) and autonomous weapon systems continue to exemplify a persistent propensity on the part of some commentators to embrace a “sky is falling” mentality whenever new weapon systems are developed and fielded.  Their concerns are, in my view, exaggerated and counterproductive.

The key to a mature assessment of IHL’s responsiveness to emergent technologies is interpretation of the law in light of both the context in which it will be applied and the underlying object and purpose of the principle or rule in question.  To begin with, IHL is a dynamic body of law that is intentionally designed to retain its relevance when new weapons appear on the battlefield, a fact rendered undeniable by Article 36 of Additional Protocol I.  That article, which even non-Party States recognize as customary (at least with respect to means of warfare), requires weapon reviews of all new systems.  Obviously, such systems are assessed against the extant law.  Lest there be any doubt as to IHL’s survivability in the face of new weapon systems, the International Court of Justice confirmed the law’s applicability in its advisory opinion on nuclear weapons.(2)  Accordingly, for instance, suggestions that cyber means and methods of warfare exist in an extra-normative space beyond the reach of IHL are completely counter-normative.

Of course, application of the law requires interpretation in the context in which it is to be applied.  As an example, the absence of “eyes on target” during a BVR engagement necessitates the use of other means of identifying the target and calculating expected collateral damage.  Similarly, hackers targeting military assets during an armed conflict in order to disrupt operations are directly participating in hostilities, and thus targetable while participating, even though they are self-evidently not the type of individuals with whom the drafters of the Additional Protocols were concerned.  And commanders who decide to launch autonomous weapon systems into a battlespace will be responsible for all of the reasonably foreseeable consequences of those operations, just as they are when approving operations involving human-operated or human-supervised systems.

At times, IHL must be further interpreted in light of its object and purpose.  Object and purpose exist at two levels.  In a general sense, international humanitarian law reflects a balancing between military necessity and humanitarian considerations that States have “agreed” upon either through treaty or through practice that has crystallized into custom.(3)  Applying existing norms to new weapons in a fashion that runs counter to this foundational balancing (in either direction) is flawed.

IHL rules also have specific objects and purposes.  For instance, international humanitarian law prohibits attacks against protected persons and objects in order to shield them from the harm incident to combat; they are, after all, uninvolved in the ongoing hostilities.  Article 49 of Additional Protocol I defines attacks as “acts of violence against the adversary, whether in offence or in defence.”  However, cyber operations are not violent in the sense of releasing kinetic force, as most weapons were at the time the article was drafted.  The Tallinn Manual nonetheless interprets violent acts as encompassing those acts that, although not violent in themselves, produce violent consequences, namely injury to, or death of, persons, and damage to, or destruction of, objects.  Doing so was consistent with the objects and purposes of the respective rules.  Moreover, the general and specific objects and purposes of the relevant rules led the majority of the experts involved in the project to extend the interpretation of “damage” to situations in which interference with the functionality of a system required some form of repair.

Interestingly, application of existing IHL to new systems can dramatically affect the nature of warfare.  Of particular note in this regard is the rule set forth in Article 57 (and customary law) that requires parties to the conflict to “[t]ake all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental loss of civilian life, injury to civilians and damage to civilian objects.”  RPAs are equipped with a sophisticated sensor suite and have the ability to loiter over a target for extended periods.  As a result, they can often strike a target with far less risk of collateral damage than a manned aircraft, which is usually flown by a single task-saturated individual, who may be distracted by the risk of the operation, and whose window within which to strike the target is narrow due to fuel limitations or target area defenses.  In such a situation, the rule would require use of an RPA to strike the target, so long as the system was both available and militarily feasible to employ in the circumstances.  The same will likely sometimes hold true in the case of future autonomous weapon systems.

I do not mean to suggest that IHL clearly and fully answers all questions raised by new weaponry.  On the contrary, I share many of the same concerns expressed in the ICRC position paper on Means and Methods of Warfare that has been posted on this site, as well as those highlighted by Dr. Cordula Droege during her interview.  In the cyber arena, important discussions are taking place among experts over such matters as reverberating effects during cyber attacks on networked target systems, especially those that are dual use; the difficulty of attribution in cyberspace when counter-attacking; the scope of the term “attack”; operationalization of the requirement to take precautions during cyber attacks, as well as that requiring passive precautions by the party under attack; cyber perfidy; the duty to respect protective emblems in the context of cyber communication; the meaning of “organized armed group” when applied to virtual hacker groups; and the geography of cyber armed conflict.  Autonomous weapon systems present questions such as how a system can calculate anticipated military advantage in order to comply with the principle of proportionality, and how accountability for IHL violations attaches to individuals ranging from manufacturers and programmers to the commanders who decide to employ such systems in particular environments.  I agree entirely with the ICRC that it will take time and careful reflection to conclusively answer these questions.  But, in my estimation, most of the answers will be found through the tried and true methodologies for interpreting IHL.

Finally, we must not ignore the fact that the international community’s values evolve over time.  As they do, we can expect IHL to be reinterpreted.  To take one example, the reliance of societies on cyberspace continues to grow.  I therefore expect the notion of attack to be reinterpreted in order to afford greater protection to such activities and the entities, such as data, on which they rely.  Beyond reinterpretation, States may craft new law to prohibit weapon systems that (although compliant with existing IHL) they find especially noxious in light of their contemporary values.  Indeed, while I have argued forcefully that a ban on certain autonomous weapon systems is premature (and that any assertion that they or their use are definitively unlawful is unsupportable), (4) I can imagine that certain States may someday decide – for policy, political, operational, or humanitarian reasons – to limit the use of such weapons on the battlefield through new treaty law, as has been done for anti-personnel landmines, booby-traps, permanently blinding lasers, incendiary weapons, and cluster weapons.  As values evolve, so too must the law.

In sum, IHL is alive and well.  It is an inherently flexible body of law, one well-suited to meet the challenges of nascent battlefield technologies.  Those who would sell it short do a disservice both to humanitarian considerations and to the conduct of effective military operations.  However, recognizing that IHL is not a perfect fit vis-à-vis every new weapon system, I join the ICRC in calling for further informed examination of the issues such systems raise.

Footnotes:

(1) TALLINN MANUAL ON THE INTERNATIONAL LAW APPLICABLE TO CYBER WARFARE r. 48 (Michael N. Schmitt gen. ed., 2013).

(2) Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 1996 I.C.J. 226, para. 86 (July 8).

(3) My views on this subject have been set forth in Michael N. Schmitt, Military Necessity and Humanity in International Humanitarian Law: Preserving the Delicate Balance, 50 VA. J. INT'L L. 795 (2010).

(4) See Michael N. Schmitt & Jeffrey S. Thurnher, "Out of the Loop": Autonomous Weapon Systems and the Law of Armed Conflict, 4 HARV. NAT'L SEC. J. 231 (2013); Michael N. Schmitt, Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics, HARV. NAT'L SEC. J. FEATURES (2013).

Previous posts in the IHL and the Challenges of Contemporary Armed Conflicts series:

Introduction by Knut Doermann, Head of the ICRC Legal Division.

Typology of conflicts, in five parts.

IHL and Terrorism, in five parts.

International Law and the Challenges of Contemporary Armed Conflicts, an ICRC report presented at the 31st International Conference of the Red Cross and Red Crescent, Geneva, 2011.