Killer robots: A study in hype

Drones, or ‘remotely piloted air systems’ in RAF speak, are now familiar. The UK has been operating these systems in Afghanistan and Iraq for several years; we now have a squadron of drone pilots based in Lincolnshire, and unarmed drones are used for various non-military purposes, such as policing and construction.

However, in 2013 a public debate started over what many consider the next phase in drone warfare: lethal autonomous robots (LARs).

LARs do not require pilots at all and are essentially the application of artificial intelligence technology to weaponry, with examples including the recently trialled Taranis drone by BAE Systems and the SGR-1 sentry gun by Samsung Techwin. The idea is that, based on pre-programmed criteria, sensor data and computer reasoning, machines can select and attack targets without human supervision.

In response to the potential (if not the use) of such systems, some high-profile people have spoken out. Christof Heyns, the UN’s Special Rapporteur on extrajudicial, summary or arbitrary executions, has called for a moratorium on the development of LARs. Nobel Laureate and veteran campaigner Jody Williams, among others, has launched an international campaign to pre-emptively ban LARs.

Conversely, US Navy engineer Ronald Arkin continues to claim that LARs are both inevitable and capable of being more humane than human soldiers.

While these debates are important, both sides employ manipulative language and hype. Here are two ways that language and unwarranted assumptions can frustrate a sensible debate on LARs.

Anthropomorphism

Depicting robots as comparable to humans has been a staple of science fiction for decades. However, this tendency, at best a metaphor, is also common in public debates about LARs: debates about real pieces of equipment that have real consequences.

For example, one way leading opponents of LARs convey this metaphor is by using the word ‘decide’ in an unqualified manner. In a piece for the Guardian, roboticist Noel Sharkey wrote that “we are sleepwalking into a brave new world where robots decide who, where and when to kill”; that “fully autonomous robots that make their own decisions about lethality are high on the US military agenda”; and that “We are going to give decisions on human fatality to machines that are not bright enough to be called stupid”.

Writing in this way invokes a particular repertoire of sci-fi imagery, which subtly misleads the audience. In reality, human and robot ‘decision-making’ are fundamentally different: one could argue that robots never really ‘make decisions’ but apply basic ‘if-then’ reasoning to data collected from environments in which humans have placed them.

They are not ‘clever’ or ‘stupid’: they are machines.
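
To make the contrast concrete, here is a minimal, hypothetical sketch (in Python) of the kind of ‘if-then’ reasoning described above. The sensor fields, thresholds and rules are invented purely for illustration and do not describe any real weapon system.

```python
# A hypothetical illustration of rule-based 'if-then' reasoning.
# Every field, value and rule here is an invented placeholder, not a real system.

def pre_programmed_response(reading: dict) -> str:
    """Apply fixed, human-written rules to a single sensor reading."""
    if reading.get("heat_signature", 0.0) < 0.8:
        return "hold"    # below a hard-coded threshold: do nothing
    if reading.get("shape_match", 0.0) < 0.9:
        return "hold"    # pattern does not match the stored template
    return "engage"      # every pre-set condition is met

# The outcome is fully determined by the sensor data and the thresholds a
# human chose in advance; there is no deliberation or judgement involved.
print(pre_programmed_response({"heat_signature": 0.95, "shape_match": 0.97}))
```

Calling the output of such a routine a ‘decision’ is, at most, a loose metaphor for applying rules that humans wrote in advance.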

Technological progress

Another way the LAR debate has been framed, with varying degrees of subtlety, is to assume that the technology progresses linearly. This is the view that, with time, robots will only ever become more sophisticated, autonomous and human-like. The example below is taken from an MoD report about the UK’s approach to drones and graphically represents this type of assumption.

[Image: diagram from the MoD report depicting drone technology progressing steadily towards full autonomy]

Implicit and explicit assumptions about the direction of the technology are, however, usually conveyed through language. Consider the use of modal verbs in the following extract from the MoD’s wildly speculative Global Strategic Trends report:

“As the information revolution continues, there will be a pervasive and dramatic growth in the role of unmanned, autonomous and intelligent systems…Systems will exhibit a range of autonomy levels from fully autonomous to significantly automated and self-coordinating, while still under high-level human command. Systems may have human-like mobility and user interfaces to act as assistants, while other designs may consist of collaborative networks of smart sensors, weapon systems or transportation platforms, treated as smart tools.”

So what?

Metaphor and framing are often necessary when communicating difficult scientific issues. However, framing can also be politically motivated and requires careful monitoring.

In the case of LARs, the hyped, sci-fi presentation of the technology can distract people from real issues at hand, such as the tawdry, automated nature of the weapon systems. Similarly, the idea of ceaseless technological progress promises fixes to moral and legal issues with LARs: fixes which are assumed rather than certain.

5 Responses to “Killer robots: A study in hype”

  1. swatnan

    The use of armed drones is the coward’s way of fighting, as bad as suicide bombings.
    US military personnel should be ashamed of themselves.

  2. blarg1987

    On the flip side, it may make conflicts less costly, as you are not going through armies or large numbers of civilians to kill a specific person.

    Unfortunately, most if not all wars do involve civilian deaths, and it is always a good thing to try to reduce these as much as possible.

    Let’s do a scenario:

    A soldier uses a laser designator to target a specific enemy structure, a pilot in an aircraft drops a laser-guided bomb, and the bomb is on target. Suddenly the soldier takes fire and the pilot picks up a SAM launch.

    Both parties take self-defence measures. As a consequence, the laser designator is no longer on the target it was meant to be, and the pilot can’t destroy the munition as he is taking evasive action. The result is that the bomb misses and ends up landing somewhere else, possibly a field with nobody in it, or possibly the centre of a town.

    Drones allow the above to take place, and if they take fire the pilot can abort the munition more easily, as he is not focused on taking evasive action; with his life not on the line, he can make clearer and more concise judgements.

    The problem with this debate is that it is a few years late: we have had smart munitions since the ’80s, such as cruise missiles etc.

    What I do believe is that any drone that is used should not be autonomous when taking any military action or while on station.

  3. swatnan

    As you’ve said, wars involve a loss of life; often it’s a calculated risk, knowing full well that perhaps 3/4 of your men or pilots will not return. Both Haig in WWI and Churchill must have known they were sending men deliberately to their deaths. It’s a tough decision. Even bombing civilian targets like Dresden, or the Shock and Awe tactics of the US in Iraq and Afghanistan, involves a calculated loss of life. But both sides do it. Sometimes honour has to be maintained, even in War and even in death. But there is no honour in sending robots to do your fighting when the enemy have less sophisticated, often homemade weapons.
    The mistake we make is that we should go fighting the enemy on the terms they’ve descended to when it comes to guerrilla warfare. Cast aside the accepted rules of engagement if the enemy refuse to accept conduct in War.
    They have only themselves to blame. Sometimes we must stoop to their level.

  4. ComputerScientist

    This is the worst kind of “journalism” that takes some old news and tries to be clever by attempting a new slant. It failed badly here. The Guardian article referred to was published in 2007, and Arkin’s army-funded project terminated in 2009. And it does not even get the facts straight: Arkin was never a naval engineer. He is an academic robotics professor at Georgia Tech.

    In terms of computers making decisions being hype or anthropomorphism, the author has clearly never heard of mathematical decision theory, nor does he have any idea of the basics of computer programming, in which an ‘if, then, else’ statement is called a ‘decision’ and used as the basis for decision trees. Of course, maybe the whole discipline of computer science is being anthropomorphic. But even so, one word does hardly a hype maketh.

    However, this is irrelevant. In his desperate search to try to find something new in old news (which has been well discussed at length in many other places over the last 7 years), the author entirely misses the point about delegating the decision to kill to machines. No matter what you call it, there are serious plans afoot to create machines that, once activated, will target and attack without further human intervention (read, for example, DoD Directive 3000.09, November 2012). The phrase ‘decision to kill’ seems quite appropriate here for a newspaper article, but maybe you have a better way to put it, Mr Gibson. So why pick on the single word “decision” and call it a sci-fi representation, other than to make yourself look clever? It is a rather odd use of the word “hype”. The author of this piece is obfuscating the facts and making this into a trivial issue, which it clearly is not.

    Not only has the UN Special Rapporteur Christof Heyns taken this to the Human Rights Council calling for a moratorium, but 117 nations gave a mandate to the Convention on Certain Conventional Weapons to begin expert discussions on the issues in May 2014. 44 nations spoke out about killer robots at the UN between May and November 2013. And it was an important issue raised by Ban Ki-moon in his annual address to the UN General Assembly.

    So to the author, Andrew Gibson, who used to be good on humanitarian issues, I would say: stop this pretentious journalism of trying to be clever with old news, and write about what is actually going on and what the dangers are of these new planned weapons and their potential proliferation. Never mind babbling on about hype from old newspaper articles. Why not report on the 51 NGOs from 23 countries that are currently working very hard in a push for a new prohibitive treaty ban?

    Why not do some ‘actual’ journalism and say something new and useful?

  5. motaman

    Maybe this article should have been titled “Not Clever Enough to be Called Stupid”. The writer seems to have missed the irony of that statement, which, to me, reads as saying that machines are neither clever nor stupid. How could he have missed that?
