Humans versus robots: The battle begins

This week, the International Committee for Robot Arms Control (ICRAC) is holding a conference in Berlin exploring options for the regulation of “killer” robots.

Andrew Gibson is a freelance journalist interested in military robotics, arms control (particularly nuclear), civil wars and politics.

This week, the International Committee for Robot Arms Control (ICRAC) is holding a conference in Berlin to explore options for the regulation of unmanned and fully autonomous combat vehicles. ICRAC, a coalition of roboticists, physicists and philosophers, have invited lawyers, members of the military and arms control wonks to offer perspectives on the feasibility of an international regulatory regime.

ICRAC have stated their goals as follows:

• The prohibition of the development, deployment and use of armed autonomous unmanned systems. Machines should not be allowed to make the decision to kill people;

• Limitations on the range and weapons carried by “man-in-the-loop” unmanned systems and on their deployment in postures threatening to other states;

• A ban on arming unmanned systems with nuclear weapons;

• The prohibition of the development, deployment and use of robot space weapons.

Professor Noel Sharkey, a roboticist from Sheffield University and founding member of ICRAC, told Left Foot Forward:

“A number of us were dissatisfied with the lack of international discussion about the role of robots in war. I’m not an expert in ethics and law but I bring technological input.

“My colleague Jürgen Altmann has a lot of experience in the regulation of nuclear and nanotechnology and other committee members are philosophers within the Just War Theory tradition.

“We decided the best way to get some kind of arms control was to engage the international community and have invited a lot of knowledgeable people to the conference to get the ball rolling.”

Whilst ICRAC focuses on both teleoperated (i.e. ‘man-in-the-loop’) and fully autonomous combat vehicles, Professor Sharkey believes the two issues will become inseparable in the future:

“It will be unclear and nobody will make it clear whether UAVs and UGVs are remotely piloted, whether there is just someone working the weapons or whether there’s nobody working the weapons at all.”

He cites Carnegie Mellon’s Crusher, BAE’s Taranis and South Korea’s deployment of SGR-A1 sentry robots as examples of machines capable of autonomy, where the public has no idea whether there is a human in the loop or not.

Last Saturday, Oxford-based NGO Fellowship of Reconciliation launched their own anti-drone campaign at a one-day workshop called ‘Drone Wars’. The event considered the history, legality and domestic implications of autonomous and semi-autonomous UAVs, bringing together academics and campaigners from across the country.

Both nationally and internationally, the humans are organising!


13 Responses to “Humans versus robots: The battle begins”

  1. Tinpot

    RT @leftfootfwd: Humans versus robots: The battle begins:

  2. kimkali

    War machines – awesome as in terrifying – "Machines should not be allowed to make the decision to kill people" = Rule 1

  3. Matthew Sinclair

    .@leftfootfwd focussing on the important issues, stopping Skynet:

  4. Nigel Stanley

    RT @leftfootfwd: Humans versus robots: The battle begins: >> Disappointingly, not about Labour, but actual robots.

  5. Gabriel Milland

    RT @mjhsinclair: .@leftfootfwd focussing on the important issues, stopping Skynet:

  6. Kate Starling

    @leftfootfwd So, invent killing machine, hold event to get legal framework, sleep at night:

  7. Matthew Taylor (MTPT)

    Washington Naval Treaties, anyone? Of course, were I in the mood to be cruel, I’d draw the logical comparison with the international peace conferences Picasso used to do the posters for…

  8. Jamie_Griff

    Until the advent of genuine Artificial Intelligence, surely the use of robots in war makes very little difference? The robots won’t be ‘making decisions’, they’ll be following their programming which, presumably, will be determined (or at least signed off) by the upper echelons of Armed Forces command. So there’s the possibility that we’ll see more accountability from higher-ups for deaths, particularly civilian deaths that take place in the ‘theatre of war’. After all, if you take human error out of the equation (the fire fight is no longer being executed by humans who get afraid, or confused, or disoriented, but by robots which will follow only their programming even if it means certain extinction) only the programming of the machines can be blamed for ‘mistakes’ such as disproportionate civilian casualties.
    If and when genuine strong AI develops, well, no one can make any predictions about that as no one can second guess what form this will take.

  9. Philip Hunt

    Machines should not be allowed to make the decision to kill people

    Autonomous killer robots have been around since 1943 — I refer to guided torpedoes which were used by the Americans and Germans in WW2.

  10. Tactical Things

    RT @leftfootfwd: Humans versus robots: The battle begins

  11. Shamik Das

    … see also this on "humans versus robots" on @leftfootfwd last week:

  12. Andrew Gibson

    Jamie: You touch on a number of things which I am about to deal with in an article, either for this website or my own blog. When ICRAC say ‘make decisions’, they mean choosing the target according to heat sensors, gait recognition or whatever. This could only happen in the most rudimentary way and would never be able to discriminate between combatants and non-combatants in the ways required by International Humanitarian Law. You rightly identify responsibility as the heart of the matter. I will write about whether programmers should take legal responsibility for fatal errors, should such systems become common.
    Sharkey et al. do not really believe that satisfactory levels of AI will be reached, unlike the US Navy-financed Ronald Arkin, and view these systems as advanced landmines.

    Philip: The terminology of this subject can get quite murky/misleading. Many military systems are fairly autonomous, as you rightly imply. Anti-missile systems (Phalanx, Patriot) are completely autonomous.

  13. Humans versus robots: The battle begins « Andrew Gibson's Blog

