Should killer robots be banned?

Andrew Gibson asks whether, as the defence industry pumps more funds into robotics, we can ever legitimately use unmanned armed robots.

Andrew Gibson is a freelance journalist interested in military robotics, arms control (particularly nuclear), civil wars and politics

South Korea has done a rhetorical U-turn on whether it will deploy fully autonomous robots along the border with North Korea. SGR-A1 sentry robots, which use infra-red cameras, motion detection and communication equipment for the exchange of army codes, and which are armed with a swivel-mounted K-3 rifle, have been developed and trialled along the misleadingly named demilitarised zone (DMZ).

Samsung Techwin, the system’s manufacturer, openly advertised the SGR-A1 robot’s autonomous nature on its release in 2006 – it can detect, question and fire upon an intruder without a human operator. However, most people involved with the project have since stressed that decisions on lethality will be taken by soldiers, and references to the SGR-A1’s autonomous settings have also been removed from Samsung’s website.

Israel has been experimenting with a similar system along the border with Gaza, but has been more emphatic about its ‘man-in-the-loop’ status. Clearly, anxiety about deploying autonomous killing machines is a result of political and legal, rather than technological, constraints. Whilst autonomous movement is becoming common in military vehicles, autonomous target acquisition is rarely boasted about.

The question is, as the defence industry pumps more funds into robotics, can we ever legitimately use unmanned armed robots?

Autonomous armed robots strike at the heart of International Humanitarian Law (IHL). Firstly, without a human operator, nobody can be held responsible should a war crime be committed. This is the view of the International Committee for Robot Arms Control (ICRAC), which is campaigning for a blanket ban. Secondly, the Fourth Geneva Convention requires warring parties to distinguish between combatants and non-combatants.

From a technological perspective, this is simply not possible. Artificial intelligence, shape recognition (‘hands up, this is a war’) and Friend-Foe Interrogation (FFI) programmes are so rudimentary that modern combat robots are little more than advanced land mines. Today or tomorrow, no autonomous robot will have sufficient perspective to operate within IHL.

Some roboticists believe these problems are surmountable, for technological or doctrinal reasons. John Canning, a US Navy engineer, has proposed a more conservative doctrine that would sidestep some of the legal issues raised by autonomy. Canning proposes autonomous weapons systems that only target weapons or enemy vehicles, extending the concept behind the Patriot Air and Missile Defence System to the wider battlefield.

Canning defines anyone with a weapon as a combatant. He believes possession of a weapon could be roughly gauged through infra-red, shape recognition and similar techniques in the near future and large military vehicles already exchange electronic codes and radio messages. He proposes an automated process of interrogation, similar to the South Korean system, to allow combatants the opportunity to surrender.
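Canning’s doctrine amounts to a decision procedure: infer whether a contact is armed, challenge it, honour surrender, and even then target the weapon rather than the person. A minimal sketch of that logic, with entirely illustrative names and no claim to represent any real system:

```python
# Hypothetical sketch of the engagement logic Canning describes: target only
# weapons and vehicles, and offer an automated interrogation/surrender step
# first. All field names and decisions here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Contact:
    carries_weapon: bool      # inferred from infra-red / shape recognition
    is_vehicle: bool          # large platforms exchanging electronic codes
    answered_challenge: bool  # responded correctly to automated interrogation
    surrendered: bool         # complied with the challenge, e.g. dropped weapon

def engagement_decision(c: Contact) -> str:
    """Return the action the conservative doctrine would permit."""
    if not (c.carries_weapon or c.is_vehicle):
        return "hold"                 # unarmed person: never a valid target
    if c.answered_challenge:
        return "hold"                 # correct codes exchanged: treat as friendly
    if c.surrendered:
        return "detain"               # surrender must be honoured
    # Even here, the doctrine aims at the weapon or vehicle, not the person.
    return "engage weapon only"

print(engagement_decision(Contact(False, False, False, False)))  # hold
print(engagement_decision(Contact(True, False, False, True)))    # detain
```

The sketch makes the article’s point visible: every branch depends on perception (the boolean inputs) being reliable, which is exactly what current recognition technology cannot guarantee.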

Canning also argues his way around the US Rules of Engagement (RoE), which normally disallow fighting in cemeteries, mosques and hospitals: places that are not necessarily marked on maps and whose engagement requires a level of human judgement. He argues that it is the responsibility of the enemy to mark these areas prior to the conflict, and that the programmers or operators would therefore not be responsible should an autonomous system fire on sensitive sites.

Ronald Arkin, a US Army-funded roboticist, has been developing prototype software for use if and when recognition technology is able to discriminate to a decent level. Arkin’s designs allow the robot to reason within IHL: essentially comparing its pre-programmed mission with facts on the ground, IHL and the RoE. Arkin argues autonomous robots have the potential to act more ethically than soldiers, as they have no desire for self-preservation and no tendency to scenario fulfilment.
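Arkin’s design is often described as an ethical constraint check: a proposed lethal action is vetoed unless it violates no pre-programmed prohibition and satisfies every obligation drawn from IHL and the RoE. A toy version of that check, with purely hypothetical constraint names:

```python
# Toy version of the constraint check behind Arkin's design: a proposed
# lethal action is permitted only if no prohibition (from IHL) applies and
# every obligation (from the RoE) holds. All rules here are illustrative.

def ethical_governor(action, prohibitions, obligations):
    """Permit `action` only if no prohibition fires and all obligations hold."""
    if any(rule(action) for rule in prohibitions):
        return False  # a forbidden condition applies: veto the action
    return all(rule(action) for rule in obligations)

# Example constraints (hypothetical):
prohibitions = [
    lambda a: a["target_type"] == "non-combatant",
    lambda a: a["location"] in {"hospital", "mosque", "cemetery"},
]
obligations = [
    lambda a: a["inside_kill_zone"],  # within the designated mission area
]

strike = {"target_type": "combatant", "location": "field", "inside_kill_zone": True}
print(ethical_governor(strike, prohibitions, obligations))  # True
```

Note that the governor is only as good as its inputs: the hard problem the article identifies is filling in fields like `target_type`, not evaluating the rules once they are filled in.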

However, most of Arkin’s examples of how armed robots could be used take place in designated ‘kill zones’ (such as heavily-leafleted villages and South Korea’s DMZ), where intruders are assumed to be enemy soldiers. He is keen to employ acoustic sensors so robots can automatically return fire, but whether somebody firing on a robot should automatically be considered an active combatant is debatable.

Arkin’s work relies not just on technological optimism but on a bizarre interpretation of the Nuremberg trials. He argues that IHL already provides for morality, such that any legal request is by definition moral. Therefore, soldiers and robots have no responsibility to question orders they believe to be legal. According to this interpretation, the blind obedience of robots makes them more ethical than the average soldier.

The problem with Canning and Arkin is that they give false hope: they encourage investment in autonomous combat robots by suggesting the legal problems will be fixed in some technological hereafter. Arkin’s presumption that any complex situation can be formalised is untrue, and Canning’s belief in the reliable exchange of codes between military vehicles will result in more tragedies like the downing of Iran Air Flight 655 in 1988.

Furthermore, the claim that a robot will be able to discern whether a wounded soldier is holding a weapon or lying next to it is so far from the current state of robotics that it is like offering to make a biological weapon that only targets bad people. Even if robotics develops to the unlikely state of Arkin’s fantasy, it would be unfair to place legal responsibility on computer programmers or commanding officers should malfunctions or poor decisions occur. As Jutta Weber has repeatedly argued, autonomy is the point.

Intelligent or unintelligent combat robots challenge IHL in myriad ways. However, there is industrial momentum behind them, matched by government support. There also appears to be genuine confusion in some quarters about whether, in certain circumstances, the use of these systems would be illegal. Some have called for regulation of the robotics industry, akin to the administration of the Chemical Weapons Convention (CWC), to halt these developments.

This is unrealistic. Autonomy is as much a software as a hardware problem, and verification is very difficult. Also, robotics is an especially dual-use industry: every advance in industrial and medical robotics contributes to military robotics.

Nevertheless, some kind of international treaty or agreement would provide legal clarity: it would signal to the military and industrial sectors what is expected of them. The Ottawa Treaty on land mines, the CWC and the Biological Weapons Convention are all premised on the Fourth Geneva Convention but are still helpful in articulating why those weapons are wrong and in asserting consensus. International treaties are the highest form of eyebrow-raising; sometimes, that is all we can do.


25 Responses to “Should killer robots be banned?”

  1. Dr Vanessa Crawford

    RT @leftfootfwd: Should killer robots be banned?

  2. Simon Sayer

    RT @leftfootfwd: Should killer robots be banned?

  3. Andy Buckley

    RT @leftfootfwd: Should killer robots be banned?

  4. Owen Lewery

    Yes, yes, yes: @leftfootfwd: Should killer robots be banned?

  5. Kunglu

    RT @leftfootfwd: Should killer robots be banned? << Not if they're designed solely to take out Tories…

  6. jeff marks

    unmanned killer war robots! excellent!

    i’ll have 5!

  7. Marcus A. Roberts

    “@leftfootfwd: Should killer robots be banned?” #tweetsworthretweetingthankstoheadlinesalone

  8. Owen

    If a war crime results from the actions of a robot, wouldn’t the programmer be held legally responsible? If so, couldn’t that be an effective deterrent to using robots liable to cause innocent deaths?

  9. Hal

    I’m sorry, Andrew. I’m afraid I can’t do that.

  10. John Beecher

    RT @leftfootfwd Should killer robots be banned? – For those of you who feel like pondering a terrifying ethical issue.

  11. Michael Vaillancourt

    RT @Blatherus: Yes, yes, yes: @leftfootfwd: Should killer robots be banned?

  12. George McLean

    I couldn’t get halfway through this on a Saturday evening. Too much “toys for boys” and too much Fitou. But “nobody can be held responsible should a war crime be committed”? Even if true, the law could be easily changed so that the designer or programmer could be indicted. Whether the law will be is another question. Where’s Mark Thomas when you need him?

  13. TransBotica

    Should killer robots be banned? | Left Foot Forward

  15. Sue Moore

    Should killer robots be banned? | Left Foot Forward

  16. jeff marks

    it wouldn’t be the programmer or designer. it would be whoever allows it to be used.

    And as for where is Mark “100k a year, married with 2 kids, white flight, anarchist and all round champagne socialist” Thomas. who cares

  17. Ayano Noda

This is way too scary. RT @leftfootfwd: Should killer robots be banned?

  18. AnneJGP

    Banning these things will only prevent their use by nations/groups which choose to recognise the ban. Now they’re possible, they will inevitably be used – somewhere, sometime.

    It follows (a) someone, somewhere, will be obliged to work on counter-measures; (b) someone, somewhere, will work out how to hack into them & turn them; (c) sooner or later the world will have to cope with them and their consequences.

  19. Rob

    “Firstly, without a human operator, nobody can be held responsible should a war crime be committed.”

    Surely this cannot be true? Surely the owner of the robot can still be held responsible for its actions? If it [mal]functions and kills someone illegally, the owner is still responsible for deploying a defective robot.

    Also, if we must have wars, two sets of robots shooting at each other is probably preferable to two sets of humans shooting at each other, though I do worry about the consequences that such a lowering of the political cost of war might bring.

  20. Should killer robots be banned? (Publisher- « Andrew Gibson's Blog

    […] article was originally published on The link is- Posted in: Uncategorized ← UAVs in the news: This month’s crap review (Sep 2010) […]

  21. Andrew Gibson

    Chaps- I thought the comment about legal responsibility would cause a minor revolt. I suppose it is poorly worded. Perhaps ‘there would not be a chain of legal responsibility I would find satisfying’ would be better.
In general, I agree with Owen and don’t think these systems should be deployed/developed in the first place. But, if they were (and operated conservatively as Arkin proposes), I still feel there would be a certain unfairness in prosecuting the programmer for glitches. It is like prosecuting an enraged soldier’s parents for raising them badly (i.e. they no longer control the soldier but are, in some way, responsible). However, I do feel I need to iron out my thinking here, so thanks for the criticism.
    Incidentally, Arkin does not claim that autonomous robots will be 100% war crime free; rather, he thinks they will do better than humans (who do v. badly).

Rob- I object to the word ‘defective’ in your comment but I realise it is in response to my poorly expressed comment on responsibility, which falsely implies that you can have a non-defective autonomous armed robot. I think your second comment gives succour to the techno-fetishists I oppose and is perhaps too idealised. But thanks for engaging with the question seriously, which many people do not.

  22. jonathan

High-level claims about the robots’ effectiveness may not match a lower-level glitch in the programming which, under unforeseen conditions, may lead to a system failure and a chaotic outcome. Just compare the billions of person-hours that have gone into developing the computer power that can be purchased in any high street shop, and the millions of purchasers who have enabled shortcomings to be revealed. Just how sure are the programmers that they have thought of all possibilities, without extensive real-time testing? I suppose the robot killing machine analogy would be that they would become more specific over time and less prone to eliminating misidentified targets (i.e. innocent people). The exclamation “Sorry about that, we didn’t intend that, I’m sure we can improve things next time” does not sound a good rationale for a defence against unintended killings.

  23. Andrew Roche

    Just read: Should killer robots be banned?

  24. John Rentoul

    "Should killer robots be banned?" Left Foot Forward Probably best not to try to answer that one (via @Conorpope)

  25. North Korea: What Kim Jong-un should do | Left Foot Forward

    […] Should killer robots be banned? – Andrew Gibson, October 16th 2010 Share | Permalink | Leave a comment […]
