Are lethal autonomous weapons (LAWS) good or bad? One perspective argues that such weapons protect civilians and soldiers; the other holds that they are a danger to international humanitarian law, to civilians and to democracy. Interestingly, however, since fully autonomous weapons do not yet exist, only the future will show what will happen. The discussion about killer robots is thus a science fiction discourse in the literal sense: it is about the future implications of technology, even though the course is being set right now.
The question of banning LAWS is thus a recurring topic at the Convention on Certain Conventional Weapons, which will meet again on 11 April. Other recent developments in the broader context of the campaign include a new UN report presented to the Human Rights Council in Geneva this March, stating that “autonomous weapons systems that require no meaningful human control should be prohibited” (in contrast to previous reports, which had only called for a moratorium on their development), and the second panel on the issue at this year’s World Economic Forum, entitled “What if robots go to war?”, featuring Angela Kane, Senior Fellow at the VCDNP; Sir Roger Carr, chair of BAE Systems; artificial intelligence expert Stuart Russell; and robot ethics expert Alan Winfield.
The science fiction linkage
The main arguments against the development of LAWS (mostly “Human-out-of-the-Loop” weapons, capable of selecting targets and delivering force without any human input or interaction) are that autonomous robots lack the human judgment and understanding of context necessary to make complex ethical choices in battle; that they heighten the risk of civilian deaths in armed conflict; that they are incompatible with international humanitarian law; and that they create an accountability gap, since it is unclear who would be legally responsible for a robot’s actions. Although these arguments raise crucial questions and point directly to the problems of LAWS, the whole killer-robots debate is a science-fiction narrative in the truest sense of the word, since that is exactly what it describes: the future implications of technology not yet developed, just like any science-fiction book or movie. Not to mention that the term “killer robot” became popular through such movies in the first place: there is hardly a news article on the issue without a picture of the dreadful Terminator baring its teeth. HRW, within the frame of the Campaign to Stop Killer Robots, deliberately plays with such connotations and fears, for example by selecting illustrations of robots gone mad as covers for the aforementioned report Losing Humanity and for the 2015 report “Mind the Gap: The Lack of Accountability for Killer Robots”, by introducing a short explanatory movie with references to corresponding sci-fi movies, and by listing the premiere of “Terminator: Genisys” in the official events calendar on the website.
What is more, the aforementioned reports read like serious and “realistic” descriptions of the plots of science-fiction movies on the topic. The Losing Humanity report states that such weapons “could be developed within 20 to 30 years”, corresponding very well to the settings of, for example, “RoboCop” (the 2014 remake), set in 2028; “I, Robot” (2004), set in 2035; or “Minority Report” (2002), set in 2054. Speaking of RoboCop: in the movie, autonomous robots on the ground in war and post-war zones have already become reality. The opening impressively shows the deployment of war robots: the US has apparently taken the city of Tehran (a fictional setting that might raise several questions of its own) and now secures it with LAWS. A US military general comments on this in a TV show: “we had Vietnam, we had Iraq, we had Afghanistan. Never again. From a military perspective, this has been invaluable. We can accomplish our objectives without any loss to American lives and I think the honest people over there really appreciate it,” to which the presenter responds: “And why wouldn't they? For the first time in their lives they get to watch their children grow up in an environment of safety and security.” This statement is reduced to absurdity when a “killer robot”, in the process of defending against suicide attackers, classifies a teenager holding a kitchen knife (whom his mother had desperately tried to stop) as a “threat” and fells him with a volley of machine-gun fire. What this scene shows is obvious: LAWS are brainless killing machines, and thus can meet neither the requirement of proportionality nor the protection and distinction of civilians required by international law. In the Losing Humanity report we find the following passage:
“An even more serious problem is that fully autonomous weapons would not possess human qualities necessary to assess an individual’s intentions, an assessment that is key to distinguishing targets. […] One way to determine intention is to understand an individual’s emotional state, something that can only be done if the soldier has emotions. […] For example, a frightened mother may run after her two children and yell at them to stop playing with toy guns near a soldier. A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intention as harmless, while a fully autonomous weapon might see only a person running toward it and two armed individuals. The former would hold fire, and the latter might launch an attack.” (p. 31)
Even though in the movie scene the teenager carries a real knife instead of “toy guns”, the similarities are obvious: it is in fact the same storyline. To give another example: in the movie, Mr. Sellars, the CEO of the company producing the LAWS, tries to have a prohibition law named after Senator Dreyfus changed in order to be able to deploy such robots within the US as well. The Senate hearing on the matter proceeds as follows:
- Senator Dreyfus: “I don't care how sophisticated these machines are, Mr. Sellars. A machine does not know what it feels like to be human. It can't understand the value of human life. Why should it be allowed to take one? To legislate over life and death, we need people who understand right from wrong. What do your machines feel?”
- Sellars: “Well, they feel no anger. They feel no prejudice. They feel no fatigue, which makes them ideal for law enforcement. Putting these machines on the streets will save countless American lives.”
- Senator: “You're evading the question. […] I asked what do these machines feel. If one of them killed a child, what would it feel?”
- Senator: “And that's the problem. That's why 72% of Americans will not stand for a robot pulling the trigger.”
Notwithstanding that this scene concerns the deployment of robots within the country rather than in war zones, the basic argument regarding emotions is the same as in the Losing Humanity report:
“Whatever their military training, human soldiers retain the possibility of emotionally identifying with civilians, ‘an important part of the empathy that is central to compassion.’ Robots cannot identify with humans, which means that they are unable to show compassion, a powerful check on the willingness to kill. For example, a robot in a combat zone might shoot a child pointing a gun at it, which might be a lawful response but not necessarily the most ethical one. By contrast, even if not required under the law to do so, a human soldier might remember his or her children, hold fire, and seek a more merciful solution to the situation, such as trying to capture the child or advance in a different direction. Thus militaries that generally seek to minimize civilian casualties would find it more difficult to achieve that goal if they relied on emotionless robotic warriors.”(p.38)
Even though they ask important and delicate questions, the campaigners against LAWS rest on the basic argument that technological development is anti-humanist, and thus not an opportunity but a threat leading us into a dystopian future; therefore, a total ban on such weapons is needed. This narrative is strengthened by consciously and unconsciously referring to science-fiction movie depictions. But we must not forget that this is not reality but a story about the possible implications of future developments. And of course science-fiction movies show us a dystopian future, because that is what they are made for: to attract people to the movie theaters and show them the newest special effects amid extensive disaster.
So, just as in many science-fiction movies, the human being is characterized as essentially good, while technology is evil. In this sense, the Losing Humanity report tells us that “Human emotions, however, also provide one of the best safeguards against killing civilians, and a lack of emotion can make killing easier. […]” (p. 37), and even more: “Rather than being understood as irrational influences and obstacles to reason, emotions should instead be viewed as central to restraint in war” (p. 39). But what about the most horrible war atrocities throughout history, committed by humans who were aggressive, retaliatory, cruel and barbarous: traits that belong to being human, not to being a machine?
The campaigners do ask the right and crucial questions; however, we should not give in to technophobia. And getting decision-makers and stakeholders to the table for serious negotiations is rarely promising when a particular desired outcome is demanded from the start. We should rather work on getting the key players (high-ranking officials and military from the relevant states such as the US, Israel, China, South Korea, European countries and others, as well as scientists, researchers, developers and the CEOs of the leading companies) to talk to each other, instead of demanding a total ban from the very beginning. Without any doubt, the development of LAWS has fundamental ethical and strategic implications, but science-fiction movies are just an action-loaded and effects-chasing dystopian version of the future, not a logical consequence of using machines in war and combat zones.