Autonomous Legal Reasoning? Legal and Ethical Issues in the Technologies of Conflict

Afghan residents look at a robot during a road clearance patrol in Logar province. ©Umit Bektas, Reuters.

On October 23, 2015, the International Committee of the Red Cross (ICRC) and the Temple University School of Law held an invitation-only, one-day workshop to discuss how the development of autonomous systems impacts questions of international humanitarian law (IHL).  Militaries have adopted a range of new technologies in recent years, including cyber operations, remotely piloted vehicles, and automated defensive weapons, and may in the future develop fully autonomous lethal weapons. These technologies share an important characteristic: the ability to operate in the absence of direct human control.  The workshop sought to engender a cross-cutting dialogue by bringing together experts with different backgrounds.  The participants (see here for the list) combined an array of professional experiences, including ICRC and U.S. government officials, public policy experts, and academics from various disciplines including law, international relations, and philosophy. 

The workshop was conducted under the Chatham House Rule, which allowed for a fluid and in-depth conversation.  Summaries of the various sessions are offered below.  In addition, select discussion papers produced for the workshop will be published in a forthcoming volume of the Temple International and Comparative Law Journal devoted to this topic. 

Session 1: Defining the terrain – What do we mean by “autonomy” in the IHL context? What capacities warrant IHL’s attention?

This session asked how we should define “autonomous systems.”  The conversation began with a discussion of the technical capacities these systems have, including their varying relationships to human actors (e.g., contrasting systems in which a human directs key operations with systems that select targets on their own, subject to a human override that could terminate or alter their operations). 

The session emphasized that the definition of autonomous systems will have far-reaching implications for how such systems are regulated, including issues of accountability. One participant suggested that we should use what the general public understands by the term “autonomous” in attempting to define it for more specific purposes, including IHL. Another participant emphasized the need to have varying definitions of “autonomous systems” (along with varying regulations and accountability regimes) depending on the level of human involvement in their operations.

A key part of the session (including a lively discussion) asked whether a system is autonomous because it acts with a purpose (on the theory that purposive actions could be tied to moral responsibility).  Such purposive activity was contrasted with systems programmed to operate according to some ex ante “checklist” of rules.  Participants discussed whether the line for autonomy involved pattern recognition or even self-awareness. The participants agreed that as technology evolves and progresses, such definitional questions may become increasingly important.

The conversation on definitions of autonomy was not limited to conventional weapons.  The group also asked whether (and which) cyber operations might qualify as “autonomous systems.”  As an example, participants debated the status of the Stuxnet worm, including whether its creators had successfully constrained its operations or whether its global spread (albeit without the payload that affected the Natanz nuclear facility) constituted evidence of some autonomy.

The session wrapped up with questions about the moral culpability and ethical accountability of various actors involved with autonomous systems. When the actions of an autonomous system produce unfavorable results, or even violate IHL, who bears responsibility?  Is it the system’s designer? Its programmers?  Those who deploy it? Or, could the system itself be held culpable?  A few participants insisted that the person who programs the system must be held accountable in all cases.  Others noted the possibility of holding multiple actors accountable for the actions of a single system.  The session concluded with a cautionary question of whether definitional debates might prove counterproductive to broader and deeper conversations about the ethical and legal aspects of the actual systems in operation or under development in the near term. 

Session 2: Unpacking the Ethical Dilemmas – Whether, when and how should we deploy autonomous weapon systems?

The second session included a robust discussion of moral accountability.  It began with one participant arguing that those who develop or code autonomous weapons are not morally accountable for their actual deployment because of the many intervening factors that may arise.  A comparison was made to parents raising children: parents are not held accountable for a child’s wrongdoing because of all the other influences on the child that might account for the wrongful behavior.  Other participants challenged the analogy, arguing either that parents may, in fact, be responsible for their children’s behavior or that the better analogy is the wrongful behavior of a pet, whose actions are attributable to its owner. 

Legal examples of corporate responsibility were also offered, including instances where corporate officers may be liable for failing to supervise a business’s operations.  Whether a similar “failure to supervise” standard should apply to those responsible for autonomous systems was subject to some debate, with a few participants noting that such a standard would not accord with those employed for other weapon systems.  Many non-autonomous weapons today are used in ways that cause harm or violate IHL, but the law does not hold their designers or producers liable; rather, IHL attributes responsibility to those who use them in ways that violate IHL.  As a philosophical matter, the question was framed more generally: should we build laws through morals, or have morals interpret the law?

Returning to the military context, the discussion of responsibility was paired with situational awareness, i.e., the fact that soldiers are expected to follow orders even if they do not understand the full situation.  This led to the question of foreseeability: if someone is accountable for the acts of another individual – e.g., a commander for his troops – at what point do unforeseeable or unanticipated acts by the troops break that chain of accountability?

A separate line of questioning involved weapons reviews and the duty under IHL to test autonomous systems to ensure that they are neither indiscriminate nor a cause of unnecessary suffering.  The participants discussed the idea of reasonableness in determining what these reviews should regard as foreseeable unlawful results of deploying a weapon system.  Participants noted that these reviews do not require autonomous systems to do no harm, nor even to avoid killing innocents (indeed, IHL allows civilian casualties where they occur consistent with the principles of distinction, proportionality, and precautions).  As one participant noted, the question of such weapon reviews is ultimately one of risk tolerance – how much risk of unlawful use or deployment of autonomous systems do their creators want to bear?  As of now, conventional wisdom seems to tolerate only a very low risk that these systems will operate in ways that cause unanticipated harms. 

In general, however, the discussion emphasized that humans are actually bad at risk assessment.  The question was thus raised whether human weapons reviews for autonomous systems actually produce better results than designing systems capable of self-assessing the risks they pose.  One participant emphasized the need to look beyond personal responsibility, given the moral hazard in examining autonomous weapons in isolation rather than in the full context in which such systems might operate.  For example, one participant argued that an autonomous weapon could have saved hundreds of lives in Darfur even as the United States refused to send in its troops; there may be under-acknowledged benefits to using these systems that need to be part of the discourse.  This led to additional discussion of whether it is ever really possible to distinguish “offensive” from “defensive” autonomous systems, where an offensive system might be deployed for a defensive purpose – as in the Darfur example – or vice versa.

Sessions 3 & 4: Will IHL rule the technology? Or, will the technology rule IHL?

These two sessions considered the lex lata (i.e., existing IHL) and the lex ferenda (i.e., what IHL “should” be) in the context of autonomous weapon systems.   Topics included the idea of having a requirement of “meaningful human control” as well as the potential need to have new IHL for autonomous systems, particularly in the cyber context. 

Participants discussed whether autonomous systems are sufficiently analogous to prior weapons systems: the latter were regulated because of their potential for mass casualties, whereas the novelty of autonomous systems rests in their subtlety and precision.  Interestingly, the participants agreed that corruptibility—the possibility that an autonomous weapon could be put to other, improper uses—should not be a consideration when reviewing and judging the compatibility of that weapon with IHL; IHL requires the system be evaluated only in terms of its intended uses.  Several participants noted, moreover, that despite IHL’s requirement that autonomous weapon systems receive a legal review as “weapons,” a majority of countries do not conduct such reviews.  In contrast, the United States was cited as a country that increasingly employs them.

Much discussion centered on what IHL should say about regulating autonomous weapons, including cyber weapons. Some participants cautioned against thinking about these weapons only in terms of arms control or international criminal law.  On cyberwarfare, however, a number of participants noted a reluctance among States to pursue a new treaty or even to concede that any particular use of a cyber capability would be per se unlawful. The fact that many States employ cyber operations for espionage purposes reinforced the idea that future legal regulation of these operations may be a difficult “sell.”  Similarly, the less developed the specifics of how IHL governs cyber operations, the more room States may have to use cyber means so long as they can avoid charges of blatant violations.  That said, the participants agreed that certain international norms regarding cyber warfare must be recognized, such as a ban on attacking hospitals.

The discussion concluded by circling back to definitional issues, with a number of participants calling for more efforts to define what IHL requires of autonomous weapon systems and/or cyber-related autonomous operations.  Participants recognized this as a line-drawing exercise that is only just beginning.  In the meantime, existing IHL and treaties continue to set a baseline of conduct that cannot be violated.  Moreover, the participants reflected on the need to improve the tools for holding countries accountable for the actions of their autonomous systems, including those in cyberspace.

Session 5: Accountability and Effectiveness

The last session focused on accountability and the effectiveness of legal regulation of these new technologies.  The conversation began with a discussion of potential obstacles to ensuring accountability. For instance, absent third-party dispute settlement, a State’s accountability rests largely on that State’s understanding of its own obligations.  This is particularly problematic for autonomous systems (with or without a cyber element), since States currently hold widely divergent views of their IHL obligations.  The participants acknowledged that when States understand their own obligations and those of other States differently, they will also divide over who should be held accountable for specific outcomes.  

Another recognized accountability obstacle involves these systems’ complexity.  If a system is not thoroughly understood, it may be difficult to determine who is at fault when it causes wrongful outcomes or behavior.  As in earlier sessions, participants discussed whether IHL should hold accountable the programmer who wrote the code that governs the system, or the military personnel who flip the ON switch.  Without a thorough understanding of what an autonomous system is intended to do and its mechanisms for doing so, determining where such fault lines lie may prove problematic.

Accountability was also discussed in terms of the applicable liability regime. Under a strict liability regime, liability attaches whenever harm occurs, regardless of whether there was any negligent action. In contrast, under a negligence regime there is liability only if it can be shown that the actor failed to meet a standard of care in light of the specific circumstances presented. During the discussions, it was argued that accountability could simply be a matter of adapting our many pre-existing accountability regimes to the world of autonomous systems.

The discussion gradually shifted from accountability for autonomous systems to a State’s accountability for illegal actions in the cyber realm.  The conversation flagged the difficulty of whether IHL could impose liability where a non-state actor comprises a single individual.  A number of participants emphasized that, as currently formulated, IHL is not designed to deal with such individual actors, even if technology might give them force multipliers – in the form of autonomous weapon systems – that let them function in ways previously available only to armed groups.  On the whole, however, the discussion raised serious questions about whether IHL should ever allow a single person to trigger an “armed conflict” through the use of autonomous weapon systems.

The day concluded with the participants uniformly acknowledging the utility of bringing together disparate actors – including those with expertise in non-IHL technologies such as driverless cars and automated financial systems – to the question of autonomy and lawyering in the law of armed conflict.  A number of participants signaled an interest in continuing the conversation at a future date and location to be determined.  

Written by: Alexis Shaw, Katie Rabinowitz, Faith Maddox-Baldini, Sam Dordick, Seth Litwack, and Philip Jones 

This summary reflects the discussion that took place at the workshop. The views expressed do not necessarily reflect those of the ICRC.