Should killer robots be banned?

Andrew Gibson asks whether, as the defence industry pumps more funds into robotics, we can ever legitimately use unmanned armed robots.

Andrew Gibson is a freelance journalist interested in military robotics, arms control (particularly nuclear), civil wars and politics

South Korea has done a rhetorical U-turn on whether it will deploy fully autonomous robots along the border with North Korea. SGR-A1 sentry robots, which use infra-red cameras, motion detection and communication equipment for the exchange of army codes, and which are armed with a swivel-mounted K-3 rifle, have been developed and trialled along the misleadingly named demilitarised zone (DMZ).

Samsung Techwin, the system’s manufacturer, openly advertised the SGR-A1 robot’s autonomous nature on its release in 2006 – it can detect, question and fire upon an intruder without a human operator. However, most people involved with the project have since stressed that decisions on lethality will be taken by soldiers, and references to the SGR-A1’s autonomous settings have also been removed from Samsung’s website.

Israel has been experimenting with a similar system along the border with Gaza, but has been more emphatic about its ‘man-in-the-loop’ status. Clearly, anxiety about deploying autonomous killing machines is a result of political and legal, rather than technological, constraints. Whilst autonomous movement is becoming common in military vehicles, autonomous target acquisition is rarely advertised.

The question is, as the defence industry pumps more funds into robotics, can we ever legitimately use unmanned armed robots?

Autonomous armed robots strike at the heart of International Humanitarian Law (IHL). Firstly, without a human operator, nobody can be held responsible should a war crime be committed. This is the view of the International Committee for Robot Arms Control (ICRAC), which is campaigning for a blanket ban. Secondly, the Fourth Geneva Convention requires warring parties to distinguish between combatants and non-combatants.

From a technological perspective, this is simply not possible. Artificial intelligence, shape recognition (‘hands up, this is a war’) and Friend-Foe Interrogation (FFI) programmes are so rudimentary that modern combat robots are little more than advanced land mines. Today or tomorrow, no autonomous robot will have sufficient perspective to operate within IHL.

Some roboticists believe these problems are surmountable, for technological or doctrinal reasons. John Canning, a US Navy engineer, has proposed a more conservative doctrine that would sidestep some of the legal issues raised by autonomy. Canning proposes autonomous weapons systems that only target weapons or enemy vehicles, extending the concept behind the Patriot Air and Missile Defence System to the wider battlefield.

Canning defines anyone with a weapon as a combatant. He believes possession of a weapon could be roughly gauged in the near future through infra-red, shape recognition and similar techniques, and notes that large military vehicles already exchange electronic codes and radio messages. He proposes an automated process of interrogation, similar to the South Korean system, to allow combatants the opportunity to surrender.
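
To make the doctrine concrete, here is a minimal sketch, in Python, of the kind of decision flow it implies: check exchanged codes, estimate whether a weapon is present, offer an opportunity to surrender, and aim at the weapon rather than the person. Every name and threshold below is hypothetical and invented for illustration; this is not Canning’s, Samsung’s or anyone else’s actual software.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Action(Enum):
        HOLD_FIRE = auto()
        ALERT_HUMAN_OPERATOR = auto()
        ISSUE_SURRENDER_WARNING = auto()
        ENGAGE_WEAPON_ONLY = auto()

    @dataclass
    class Track:
        has_valid_code: bool        # exchanged electronic/army code checks out
        weapon_confidence: float    # 0..1, from infra-red and shape recognition
        warning_issued: bool        # automated interrogation has taken place
        complied: bool              # target responded by surrendering

    def decide(track: Track, weapon_threshold: float = 0.9) -> Action:
        """Illustrative flow for a 'target the weapon, not the person' doctrine."""
        if track.has_valid_code:
            return Action.HOLD_FIRE                  # treated as friendly
        if track.weapon_confidence < weapon_threshold:
            return Action.ALERT_HUMAN_OPERATOR       # cannot confirm a weapon
        if not track.warning_issued:
            return Action.ISSUE_SURRENDER_WARNING    # give the chance to surrender
        if track.complied:
            return Action.HOLD_FIRE                  # surrender accepted
        return Action.ENGAGE_WEAPON_ONLY             # aim at the weapon, not the person

Even in this toy form, all the difficulty is hidden inside weapon_confidence and complied: precisely the sensing and judgement that, as argued below, machines do not have.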

Canning also argues his way around the US Rules of Engagement (RoE), which normally disallow fighting in cemeteries, mosques and hospitals: places that are not necessarily marked on maps and where the decision to engage requires a level of human judgement. He argues that it is the responsibility of the enemy to mark these areas prior to the conflict, and that the programmers or operators would therefore not be responsible should an autonomous system fire on sensitive sites.

Ronald Arkin, a US Army-funded roboticist, has been developing prototype software for use if and when recognition technology is able to discriminate to a decent level. Arkin’s designs allow the robot to reason within IHL: essentially comparing its pre-programmed mission with facts on the ground, IHL and the RoE. Arkin argues autonomous robots have the potential to act more ethically than soldiers, as they have no desire for self-preservation and no tendency to scenario fulfilment.
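
The general shape of such a design can be sketched as a constraint-checking filter: a proposed engagement is released only if it breaches no prohibition derived from IHL and the RoE and is actually required by the mission. The Python below is a hedged illustration of that idea, with invented names and toy rules; it is not Arkin’s actual software.

    from typing import Callable, Dict, List

    Constraint = Callable[[Dict[str, bool]], bool]  # True if the rule is satisfied

    def governor(engagement: Dict[str, bool],
                 prohibitions: List[Constraint],
                 obligations: List[Constraint]) -> bool:
        """Allow a proposed engagement only if no prohibition is breached
        and at least one mission obligation requires it."""
        if any(not rule(engagement) for rule in prohibitions):
            return False
        return any(rule(engagement) for rule in obligations)

    # Toy 'facts on the ground', as reported by the robot's sensors.
    engagement = {"target_is_combatant": True, "in_protected_site": False}

    prohibitions = [
        lambda e: not e["in_protected_site"],   # never fire on protected sites
        lambda e: e["target_is_combatant"],     # never target non-combatants
    ]
    obligations = [
        lambda e: e["target_is_combatant"],     # mission: engage enemy combatants
    ]

    print(governor(engagement, prohibitions, obligations))  # True for this toy input

The scheme stands or falls on the booleans fed into it: whether ‘this is a combatant’ or ‘this is a protected site’ can actually be sensed is exactly the point in doubt.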

However, most of Arkin’s examples of how armed robots could be used take place in designated ‘kill zones’ (such as heavily leafleted villages and South Korea’s DMZ), where intruders are assumed to be enemy soldiers. He is keen to employ acoustic sensors so robots can automatically return fire, but whether somebody firing on a robot should automatically be considered an active combatant is debatable.

Arkin’s work relies not just on technological optimism but on a bizarre interpretation of the Nuremberg trials. He argues that IHL already provides for morality, such that any legal request is by definition moral. Therefore, soldiers and robots have no responsibility to question orders they believe to be legal. According to this interpretation, the blind obedience of robots makes them more ethical than the average soldier.

The problem with Canning and Arkin is that they give false hope: they encourage investment in autonomous combat robots by suggesting the legal problems will be fixed in some technological hereafter. Arkin’s presumption that any complex situation can be formalised is untrue and Canning’s belief in the reliable exchange of codes between military vehicles will result in more tragedies like the downing of Iran Air Flight 655 in 1988.

Furthermore, the claim that a robot will be able to discern whether a wounded soldier is holding a weapon or merely lying next to it is so far from the current state of robotics that it is like offering to make a biological weapon that only targets bad people. Even if robotics develops to the unlikely state of Arkin’s fantasy, it would be unfair to place legal responsibility on computer programmers or commanding officers should malfunctions or poor decisions occur. As Jutta Weber has repeatedly argued, autonomy is the point.

Intelligent or unintelligent, combat robots challenge IHL in myriad ways. However, there is industrial momentum behind them, matched by government support. There also appears to be genuine confusion in some quarters about whether, in certain circumstances, the use of these systems would be illegal. Some have called for the robotics industry to be regulated, akin to the administration of the Chemical Weapons Convention (CWC), in order to halt these developments.

This is unrealistic. Autonomy is as much a software problem as a hardware one, and verification is very difficult. Also, robotics is an especially dual-use industry: every advance in industrial and medical robotics contributes to military robotics.

Nevertheless, some kind of international treaty or agreement would provide legal clarity: it would signal to the military and industrial sectors what is expected of them. The Ottawa Treaty on land mines, the CWC and the Biological Weapons Convention are all premised on the Fourth Geneva Convention, but they are still helpful in articulating why those weapons are wrong and in asserting consensus. International treaties are the highest form of eyebrow raising; sometimes, that is all we can do.

25 Responses to “Should killer robots be banned?”

  1. Andrew Gibson

    Chaps- I thought the comment about legal responsibility would cause a minor revolt. I suppose it is poorly worded. Perhaps ‘there would not be a chain of legal responsibility I would find satisfying’ would be better.
    In general, I agree with Owen and don’t think these systems should be deployed/developed in the first place. But, if they were (and operated conservatively, as Arkin proposes), I still feel there would be a certain unfairness in prosecuting the programmer for glitches. It is like prosecuting an enraged soldier’s parents for raising them badly (i.e. they no longer control the soldier but are, in some way, responsible). However, I do feel I need to iron out my thinking here, so thanks for the criticism.
    Incidentally, Arkin does not claim that autonomous robots will be 100% war crime free; rather, he thinks they will do better than humans (who do v. badly).

    Rob – I object to the word ‘defective’ in your comment, but I realise it is in response to my poorly expressed comment on responsibility, which falsely implies that you can have a non-defective autonomous armed robot. I think your second comment gives succour to the techno-fetishists I oppose and is perhaps too idealised. But thanks for engaging with the question seriously, which many people do not.

  2. jonathan

    High-level claims about the robots’ effectiveness may not match a lower-level glitch in the programming, which under unforeseen conditions may lead to a system failure and a chaotic outcome. Just compare the billions of person-hours that have gone into developing the present computer power that can be purchased in any high street shop, and the millions of purchasers who have enabled shortcomings to be revealed. Just how sure are the programmers that they have thought of all possibilities, without extensive real-time testing? I suppose the robot killing-machine analogy would be to the effect that they would become more specific over time and less prone to eliminate misidentified targets (i.e., innocent people). The exclamation “Sorry about that, we didn’t intend that, I’m sure we can improve things next time” does not sound like a good rationale for a defence against unintended killings.
