O the Humanity: The Law of Killer Robots
By Jordan Calazan Manalastas*
When considering the vice or virtue of various instruments of death, I like to recall Christopher Hitchens’ words near the start of our adventure in Afghanistan: “Cluster bombs are perhaps not good in themselves, but when they are dropped on identifiable concentrations of Taliban troops, they do have a heartening effect.” Morbid though those words might be, the statement sums up a simple, incontrovertible truth: a bomb by any other means would be just as bloody.
All of this, of course, came before the Predator drone began to occupy its singular position in our legal and military consciousness. There is something uniquely unsettling about waging war by remote control from a cockpit in Nevada, and that unease is only heightened by the inevitable stumbling over sticky questions like “due process” and “human rights.”
But at the most fundamental level, this should not be disturbing; after all, there is nothing new under the sun. A target is a target, and a body is a body. But can the same still be said of fully autonomous weapon systems—a rather technocratic euphemism for killer robots? Though history seems to slog along more slowly than Terminator suggests, the question is not unthinkable. Recently, Chatham House, a British international affairs think tank, convened a conference to debate the legal, political and moral implications of robotic warfare. If even the world’s current piloted drones can cause controversy, then surely unmanned killers won’t be welcomed with the selfsame zest that one’s inner techno-fetishist might wish.
One way in which the revulsion to drones infects our thoughts on killer robots is the issue of accountability: can we trust the machines to do our killing for us? This is not as paranoid as it may seem. It has become increasingly difficult for the Obama administration to shrug off as “collateral damage” the tremendous toll on civilian life its drone strikes have inflicted—and rightly so. And autonomous weaponry, even more worryingly, would cede the single element that has governed how we kill throughout our history: human discretion. This usurpation has not gone unnoticed. Human Rights Watch, for example, doubts whether machines could kill discriminately and proportionately enough to satisfy international humanitarian law. The organization has also vocally questioned the legal compliance of the U.S. drone program; how much better could robots be expected to fare where humans, putatively, have failed? This is a serious and open question.
Charles Blanchard, General Counsel of the Air Force and a speaker at the Chatham House conference, has tried to mollify such qualms by arguing that “a robotic weapon that cannot meet international norms is unlikely to have a military advantage on the battlefield.” There would thus appear to be a happy confluence of humanitarian law and military interest in making the world safe from indiscriminate killing machines. Skeptics like yours truly may not be thoroughly reassured, especially since the “military advantage” of the partially analogous drone strikes is itself debatable. If the strategic necessity or tactical advantage of drones is illusory—and yet the strikes continue—then Blanchard’s claim collapses: a “military advantage” that evidently plays no real part in the drone calculus cannot be trusted to enforce compliance with international norms. But alongside this strand of thought lurks the general suspicion that we owe much of our mindless violence and unnecessary casualties to the fact that humans wield the weapons.
A more sinister problem may be structural. The U.S. drone program is notoriously secretive and inscrutable; one must grapple with the bewildering fact that our government can kill, at the touch of a button, its very own citizens—without any outside scrutiny. How much more open to abuse would a weapon that pulled its own trigger be, paired with the opacity we already face? How much “due process” might a machine respect? And would we risk enabling what Human Rights Watch called a “robotic arms race”?
A deeper objection, noted by Blanchard, is that autonomous weapons are singularly and inherently repugnant because, for the first time in human history, combatants may be deprived of the dignity of being killed by a fellow human being. There is nothing strange about singling out a particular weapon as anathema. Chemical weapons and land mines—hallmarks of tyrants and brutes—have also been reviled by international norms and instruments. With those weapons, however, the problem seems to be either their utter lack of discrimination in afflicting civilian populations or the immeasurable suffering caused by their use—as can be seen in Halabja or eastern Burma. A critic of killer robots, on the other hand, might assume for discussion’s sake that the killer robot is an entirely precise weapon, and still object that only humans should be in the business of killing humans.
Notice here a rather insidious implication—would warfare truly be more tolerable by making it more personal? Materialists like yours truly see no difference between the human and the mechanical decisions to take a life. Killing is barbaric and undignified as it is; so long as one must do it, the cleaner and more removed, the better.
The best objection to this admittedly cold calculation may be that robotic killing desensitizes us to the barbarism of war—one becomes more trigger-happy the fewer triggers one must personally pull. And this, in turn, must be squared against the equally compelling claim that precision and impartiality are crucial on the battlefield. By way of compromise, perhaps we can all agree that until such time as both the technology of killer robots and the governing structures that put them into play are trustworthy and transparent, there ought to be a moratorium on their deployment. Until then, I for one am squeamish about our new robotic overlords.
Citation: Jordan Calazan Manalastas, O the Humanity: The Law of Killer Robots, 2 Cornell Int’l L.J. Online 67 (2014).
* Jordan Calazan Manalastas is a J.D. candidate at Cornell Law School, where he is the Cornell International Law Journal’s Associate on Middle Eastern Affairs and a research associate for the Legal Information Institute. He holds an A.B. in political theory from the University of California, Los Angeles.
 Christopher Hitchens, It’s a Good Time for War, The Boston Globe (Sept. 8, 2002), https://www.boston.com/news/packages/sept11/anniversary/globe_stories/090802_hitchens_entire.htm.
 For a personal account of drone operation, see Matthew Power, Confessions of a Drone Warrior, GQ (Oct. 23, 2013), http://www.gq.com/news-politics/big-issues/201311/drone-uav-pilot-assassination.
 For an analysis of some of the problematic features of U.S. targeted killing policy, see Jordan Calazan Manalastas, Through a Drone Darkly, 1 Cornell Int’l L.J. Online 116 (2013).
 Human Rights Watch uses “killer robot” to describe autonomous weapons. See Human Rights Watch, Losing Humanity: The Case Against Killer Robots (2012), available at http://www.hrw.org/reports/2012/11/19/losing-humanity-0.
 See Autonomous Military Technologies: Policy and Governance for Next Generation Defence Systems, Chatham House, http://www.chathamhouse.org/Autonomous.
 See Human Rights Watch, Between a Drone and Al-Qaeda: The Civilian Cost of U.S. Targeted Killings in Yemen (2013), available at http://www.hrw.org/sites/default/files/reports/yemen1013_ForUpload.pdf; Amnesty International, Will I Be Next?: U.S. Drone Strikes in Pakistan (2013), available at http://www.amnestyusa.org/sites/default/files/asa330132013en.pdf.
 See Q&A on Fully Autonomous Weapons, Human Rights Watch, Oct. 21, 2013, http://www.hrw.org/news/2013/10/21/qa-fully-autonomous-weapons [hereinafter Weapons].
 See Human Rights Watch, supra note 6.
 Charles Blanchard, Autonomous Weapons: Is an Arms Race Really a Threat?, Lawfare (Feb. 23, 2014), http://www.lawfareblog.com/2014/02/autonomous-weapons-is-an-arms-race-really-a-threat/.
 See, e.g., Akbar Ahmed, The Thistle and the Drone: How America’s War on Terror Became a Global War on Tribal Islam (2013) (arguing that drone strikes against Islamic tribesmen necessitate, by virtue of the victim’s cultural code of honor and vengeance, a retaliatory response; the militant Islamic groups that arose to provide an avenue for tribesmen to pursue these cultural prerogatives are a distracting ideological mask that hides the true, socio-cultural explanation for the continuing cycles of violence); but see Christopher Swift, The Drone Blowback Fallacy, Foreign Affairs (July 1, 2012), http://www.foreignaffairs.com/articles/137760/christopher-swift/the-drone-blowback-fallacy.
 See Al-Aulaqi v. Obama, 727 F. Supp. 2d 1 (D.D.C. 2010) (holding that the propriety of targeting a U.S. citizen was a non-justiciable question).
 See Weapons, supra note 7.
 See Charles Blanchard, Autonomous Weapons at Chatham House: It’s Bentham Versus Kant, Opinio Juris (Mar. 4, 2014), http://opiniojuris.org/2014/03/04/guest-post-blanchard-autonomous-weapons-chatham-house-bentham-versus-kant/.
 See Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction, Jan. 13, 1993, 1974 U.N.T.S. 45, 32 I.L.M. 800 (1993).
 See Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on Their Destruction, Sept. 18, 1997, 36 I.L.M. 1507 (1997) (entered into force Mar. 1, 1999).
 See Thousands Die in Halabja Gas Attack, BBC News (Mar. 16, 1988) http://news.bbc.co.uk/onthisday/hi/dates/stories/march/16/newsid_4304000/4304853.stm.
 See Karen Human Rights Group, Uncertain Ground: Landmines in Eastern Burma (2012), available at http://www.khrg.org/2012/05/uncertain-ground-landmines-eastern-burma.
 See Blanchard, supra note 13 (noting that the philosophical objection is less concerned with a utilitarian calculus and more with “human dignity”).