Should killer robots be banned?

Andrew Gibson asks whether, as the defence industry pumps more funds into robotics, we can ever legitimately use unmanned armed robots.

Andrew Gibson is a freelance journalist interested in military robotics, arms control (particularly nuclear), civil wars and politics

South Korea has done a rhetorical U-turn on whether it will deploy fully autonomous robots along the border with North Korea. SGR-A1 sentry robots, which use infra-red cameras, motion detection and communication equipment for the exchange of army codes, and which are armed with a swivel-mounted K-3 rifle, have been developed and trialled along the misleadingly named demilitarised zone (DMZ).

Samsung Techwin, the system’s manufacturer, openly advertised the SGR-A1 robot’s autonomous nature on its release in 2006 – it can detect, question and fire upon an intruder without a human operator. However, most people involved with the project have since stressed that decisions on lethality will be taken by soldiers, and references to the SGR-A1’s autonomous settings have been removed from Samsung’s website.
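To make that distinction concrete, here is a minimal, purely illustrative sketch (in Python) of how a sentry’s decision loop might separate ‘man-in-the-loop’ operation from a fully autonomous setting. The names and structure are invented for illustration and do not describe Samsung’s actual software.

```python
# Illustrative sketch only: a hypothetical sentry decision loop contrasting
# 'man-in-the-loop' and fully autonomous operation. Names are invented and
# do not reflect Samsung's software.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Track:
    is_moving: bool           # motion detection
    heat_signature: bool      # infra-red camera
    answered_challenge: bool  # correct army code exchanged

def sentry_step(track: Track, human_in_loop: bool,
                operator_approves: Callable[[Track], bool]) -> str:
    """Decide what the sentry does with a detected track."""
    if not (track.is_moving and track.heat_signature):
        return "ignore"
    if track.answered_challenge:
        return "stand down"              # valid code: treated as friendly
    if human_in_loop:
        # lethal decision deferred to a soldier at a console
        return "fire" if operator_approves(track) else "hold"
    # fully autonomous setting: the machine decides on lethality itself
    return "fire"
```

The only difference between the two modes is who makes the final call; the sensing and challenge steps are identical.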

Israel has been experimenting with a similar system along the border with Gaza, but it has been more emphatic about its ‘man-in-the-loop’ status. Clearly, anxiety about deploying autonomous killing machines stems from political and legal, rather than technological, constraints. Whilst autonomous movement is becoming common in military vehicles, autonomous target acquisition is rarely advertised.

The question is, as the defence industry pumps more funds into robotics, can we ever legitimately use unmanned armed robots?

Autonomous armed robots strike at the heart of International Humanitarian Law (IHL). Firstly, without a human operator, nobody can be held responsible should a war crime be committed. This is the view of the International Committee for Robot Arms Control (ICRAC), which is campaigning for a blanket ban. Secondly, the Geneva Conventions and their Additional Protocols require warring parties to distinguish between combatants and non-combatants.

From a technological perspective, this is simply not possible. Artificial intelligence, shape recognition (‘hands up, this is a war’) and Friend-Foe Interrogation (FFI) programmes are so rudimentary that modern combat robots are little more than advanced land mines. Neither today nor in the foreseeable future will an autonomous robot have sufficient perspective to operate within IHL.

Some roboticists believe these problems are surmountable, for technological or doctrinal reasons. John Canning, a US Navy engineer, has proposed a more conservative doctrine that would sidestep some of the legal issues raised by autonomy. Canning proposes autonomous weapons systems that target only weapons or enemy vehicles, extending the concept behind the Patriot Air and Missile Defence System to the wider battlefield.

Canning defines anyone with a weapon as a combatant. He believes possession of a weapon could, in the near future, be roughly gauged through infra-red, shape recognition and similar techniques, and notes that large military vehicles already exchange electronic codes and radio messages. He proposes an automated process of interrogation, similar to the South Korean system, to allow combatants the opportunity to surrender.
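As a rough illustration of what such a doctrine amounts to in practice, the sketch below encodes the three checks Canning describes: a valid code exchange, a detected weapon, and an opportunity to surrender. Every name, data structure and threshold here is a hypothetical stand-in, not anything Canning has published as code.

```python
# Hypothetical encoding of Canning's proposed doctrine: target the weapon or
# vehicle, not the person, and interrogate before firing. All names and
# thresholds are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    weapon_confidence: float        # from infra-red / shape recognition
    iff_code_valid: Optional[bool]  # electronic code exchange; None if no response
    surrendered: bool               # outcome of an automated challenge

WEAPON_THRESHOLD = 0.9  # hypothetical confidence cut-off

def engagement_decision(c: Contact) -> str:
    if c.iff_code_valid:
        return "do not engage"           # friendly vehicle answered correctly
    if c.weapon_confidence < WEAPON_THRESHOLD:
        return "do not engage"           # no weapon detected: not a combatant under this doctrine
    if c.surrendered:
        return "do not engage"           # challenge issued and accepted
    return "engage weapon/vehicle only"  # aim point is the weapon, not the person
```

Even in this toy form, everything hangs on whether the sensing behind those three fields can be trusted.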

Canning also argues around the US Rules of Engagement (RoE), which normally prohibit fighting in cemeteries, mosques and hospitals: places that are not necessarily marked on maps and that require a level of human judgement to identify. He argues that it is the responsibility of the enemy to mark these areas prior to the conflict, and therefore that the programmers or operators would not be responsible should an autonomous system fire on sensitive sites.

Ronald Arkin, a US Army-funded roboticist, has been developing prototype software for use if and when recognition technology is able to discriminate to a decent level. Arkin’s designs allow the robot to reason within IHL: essentially comparing its pre-programmed mission with facts on the ground, IHL and the RoE. Arkin argues autonomous robots have the potential to act more ethically than soldiers, as they have no desire for self-preservation and no tendency to scenario fulfilment.
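Arkin describes this constraint-checking component as an ‘ethical governor’: a proposed lethal action is released only if no pre-programmed constraint forbids it. The fragment below is a very rough sketch of that general idea; the constraints and data structures are invented for illustration and are not Arkin’s software.

```python
# Rough, invented sketch of constraint checking of the kind described above:
# a proposed lethal action is permitted only if no IHL or RoE constraint
# forbids it. Not Arkin's actual code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProposedAction:
    target_is_combatant: bool
    inside_kill_zone: bool
    near_protected_site: bool   # hospital, mosque, cemetery marked on the map
    expected_collateral: int    # estimated civilian casualties

# Each constraint returns True when it FORBIDS the action.
CONSTRAINTS: List[Callable[[ProposedAction], bool]] = [
    lambda a: not a.target_is_combatant,   # principle of distinction
    lambda a: not a.inside_kill_zone,      # mission / RoE boundary
    lambda a: a.near_protected_site,       # RoE: protected places
    lambda a: a.expected_collateral > 0,   # crude proportionality stand-in
]

def governor(action: ProposedAction) -> bool:
    """Permit the action only if no constraint forbids it."""
    return not any(forbids(action) for forbids in CONSTRAINTS)
```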

However, most of Arkin’s examples of how armed robots could be used take place in designated ‘kill zones’ (such as heavily-leafleted villages and South Korea’s DMZ), where intruders are assumed to be enemy soldiers. He is keen to employ acoustic sensors so robots can automatically return fire, but whether somebody firing on a robot should automatically be considered an active combatant is debatable.

Arkin’s work relies not just on technological optimism but on a bizarre interpretation of the Nuremberg trials. He argues that IHL already provides for morality, such that any lawful order is by definition moral. Therefore, soldiers and robots have no responsibility to question orders they believe to be legal. According to this interpretation, the blind obedience of robots makes them more ethical than the average soldier.

The problem with Canning and Arkin is that they give false hope: they encourage investment in autonomous combat robots by suggesting the legal problems will be fixed in some technological hereafter. Arkin’s presumption that any complex situation can be formalised is untrue, and Canning’s belief in the reliable exchange of codes between military vehicles will result in more tragedies like the downing of Iran Air Flight 655 in 1988.

Furthermore, the claim that a robot will be able to discern whether a wounded soldier is holding a weapon or merely lying next to one is so far from the current state of robotics that it is like offering to make a biological weapon that only targets bad people. Even if robotics develops to the unlikely state of Arkin’s fantasy, it would be unfair to place legal responsibility on computer programmers or commanding officers should malfunctions or poor decisions occur. As Jutta Weber has repeatedly argued, autonomy is the point.

Intelligent or unintelligent, combat robots challenge IHL in myriad ways. However, there is industrial momentum behind them, matched by government support. There also appears to be genuine confusion in some quarters about whether, in certain circumstances, the use of these systems would be illegal. Some have called for the robotics industry to be regulated, along the lines of the Chemical Weapons Convention (CWC), in order to halt these developments.

This is unrealistic. Autonomy is as much a software problem as a hardware one, and verification is very difficult. Robotics is also an especially dual-use industry: every advance in industrial and medical robotics contributes to military robotics.

Nevertheless, some kind of international treaty or agreement would provide legal clarity: it would signal to the military and industrial sectors what is expected of them. The Ottawa Treaty on land mines, the CWC and the Biological Weapons Convention all rest on principles already found in the Geneva Conventions, but they are still helpful in articulating why those weapons are wrong and in asserting consensus. International treaties are the highest form of eyebrow raising; sometimes, that is all we can do.

25 Responses to “Should killer robots be banned?”

  1. jeff marks

    it wouldn’t be the programmer or designer. it would be whoever allows it to be used.

    And as for where Mark “100k a year, married with 2 kids, white flight, anarchist and all round champagne socialist” Thomas is: who cares?

  2. Ayano Noda

    This is way too scary. RT @leftfootfwd: Should killer robots be banned? http://bit.ly/aJrA2b

  3. AnneJGP

    Banning these things will only prevent their use by nations/groups which choose to recognise the ban. Now that they’re possible, they will inevitably be used somewhere, sometime.

    It follows that (a) someone, somewhere, will be obliged to work on counter-measures; (b) someone, somewhere, will work out how to hack into them & turn them; (c) sooner or later the world will have to cope with them and their consequences.

  4. Rob

    “Firstly, without a human operator, nobody can be held responsible should a war crime be committed.”

    Surely this cannot be true? Surely the owner of the robot can still be held responsible for its actions? If it [mal]functions and kills someone illegally, the owner is still responsible for deploying a defective robot.

    Also, if we must have wars, two sets of robots shooting at each other is probably preferable to two sets of humans shooting at each other, though I do worry about the consequences that such a lowering of the political cost of war might bring.

