Andrew Gibson is a freelance journalist interested in military robotics, arms control (particularly nuclear), civil wars and politics
This week, the International Committee for Robot Arms Control (ICRAC) is holding a conference in Berlin to explore options for the regulation of unmanned and fully autonomous combat vehicles. ICRAC, a coalition of roboticists, physicists and philosophers, have invited lawyers, members of the military and arms control wonks to offer perspectives on the feasibility of an international regulatory regime.
ICRAC have stated their goals as follows:
• The prohibition of the development, deployment and use of armed autonomous unmanned systems. Machines should not be allowed to make the decision to kill people;
• Limitations on the range and weapons carried by “man in the loop” unmanned systems and on their deployment in postures threatening to other states;
• A ban on arming unmanned systems with nuclear weapons;
• The prohibition of the development, deployment and use of robot space weapons.
Professor Noel Sharkey, a roboticist from Sheffield University and founding member of ICRAC, told Left Foot Forward:
“A number of us were dissatisfied at the lack of international discussion about the role of robots in war. I’m not an expert in ethics and law but I bring technological input.
“My colleague Jürgen Altmann has a lot of experience in the regulation of nuclear and nanotechnology and other committee members are philosophers within the Just War Theory tradition.
“We decided the best way to get some kind of arms control was to engage the international community and have invited a lot of knowledgeable people to the conference to get the ball rolling.”
Whilst ICRAC focuses on both teleoperated (i.e. ‘man-in-the-loop’) and fully autonomous combat vehicles, Professor Sharkey believes the two issues will become inseparable in the future:
“It will be unclear and nobody will make it clear whether UAVs and UGVs are remotely piloted, whether there is just someone working the weapons or whether there’s nobody working the weapons at all.”
He cites Carnegie Mellon’s Crusher, BAE’s Taranis and South Korea’s deployment of SGR-A1 sentry robots as examples of machines capable of autonomous operation, where the public has no way of knowing whether there is a human in the loop or not.
Last Saturday, Oxford-based NGO Fellowship of Reconciliation launched their own anti-drone campaign at a one-day workshop called ‘Drone Wars’. The conference considered the history, legality and domestic implications of autonomous and semi-autonomous UAVs, bringing together academics and campaigners from across the country.
Both nationally and internationally, the humans are organising!
13 Responses to “Humans versus robots: The battle begins”
Kate Starling
@leftfootfwd So, invent killing machine, hold event to get legal framework, sleep at night: http://bit.ly/b3L0u8
Matthew Taylor (MTPT)
Washington Naval Treaties, anyone? Of course, were I in the mood to be cruel, I’d draw the logical comparison with the international peace conferences Picasso used to do the posters for…
Jamie_Griff
Until the advent of genuine Artificial Intelligence surely the use of robots in war makes very little difference? The robots won’t be ‘making decisions’, they’ll be following their programming which, presumably, will be determined (or at least signed off) by the upper echelons of Armed Forces command. So there’s the possibility that we’ll see more accountability from higher-ups for deaths, particularly civilian deaths that take place in the ‘theatre of war’. After all, if you take human error out of the equation (the fire fight is no longer being executed by humans who get afraid, or confused, or disoriented, but by robots which will follow only their programming even if it means certain extinction), only the programming of the machines can be blamed for ‘mistakes’ such as disproportionate civilian casualties.
If and when genuine strong AI develops, well, no one can make any predictions about that as no one can second guess what form this will take.
Philip Hunt
Machines should not be allowed to make the decision to kill people
Autonomous killer robots have been around since 1943. I refer to the guided torpedoes which were used by the Americans and Germans in WW2.
Tactical Things
RT @leftfootfwd: Humans versus robots: The battle begins http://bit.ly/cyPTYj