Killer robots: A study in hype

Drones, or ‘remotely piloted air systems’ in RAF speak, are now familiar. The UK has been operating these systems in Afghanistan and Iraq for several years; we now have a squadron of drone pilots based in Lincolnshire, and unarmed drones are used for various non-military purposes, such as policing and construction.

However, in 2013 a public debate started over what many consider the next phase in drone warfare: lethal autonomous robots (LARs).

LARs do not require pilots at all and are essentially the application of artificial intelligence technology to weaponry; examples include BAE Systems’ recently trialled Taranis drone and Samsung Techwin’s SGR-1 sentry gun. The idea is that, based on pre-programmed criteria, sensor data and computer reasoning, machines can select and attack targets without human supervision.

In response to the potential (if not yet the use) of such systems, some high-profile figures have spoken out. Christof Heyns, the UN’s Special Rapporteur on extrajudicial, summary or arbitrary executions, has called for a moratorium on the development of LARs. Nobel Laureate and veteran campaigner Jody Williams, among others, has launched an international campaign to pre-emptively ban LARs.

Conversely, the US roboticist Ronald Arkin continues to claim that LARs are both inevitable and capable of being more humane than human soldiers.

While these debates are important, both sides employ manipulative language and hype. Here are two ways that language and unwarranted assumptions can frustrate a sensible debate on LARs.


Humanising machines

Depicting robots as comparable to humans has been a staple of science fiction for decades. However, this tendency, at best a metaphor, is also common in public debates about LARs: debates about real pieces of equipment with real consequences.

For example, one way leading opponents of LARs convey this metaphor is by using the word ‘decide’ in an unqualified manner. In a piece for the Guardian, the roboticist Noel Sharkey wrote that “we are sleepwalking into a brave new world where robots decide who, where and when to kill”; that “fully autonomous robots that make their own decisions about lethality are high on the US military agenda”; and that “We are going to give decisions on human fatality to machines that are not bright enough to be called stupid”.

Writing in this way invokes a particular repertoire of sci-fi imagery, which subtly misleads the audience. In reality, human and robot ‘decision-making’ are fundamentally different: one could argue that robots never really ‘make decisions’ at all, but simply apply basic ‘if-then’ reasoning to data collected from environments in which humans have placed them.

They are not ‘clever’ or ‘stupid’: they are machines.
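To make this concrete, here is a deliberately crude sketch in Python (with invented sensor names and thresholds; no real system’s logic is published in this form) of what such ‘if-then’ reasoning looks like: conditions in, action out, and nothing resembling judgement in between.

```python
# Toy illustration only: invented sensor inputs and thresholds,
# not the logic of any real weapon system.

def engage(heat_signature: float, shape_match: float, in_zone: bool) -> bool:
    """'Decide' whether to fire by checking pre-programmed criteria."""
    if not in_zone:          # outside the designated engagement area
        return False
    if heat_signature > 0.8 and shape_match > 0.9:
        return True          # criteria met: the machine 'chooses' to engage
    return False

# The machine evaluates these conditions and nothing else: no context,
# no intent, no weighing of consequences.
print(engage(heat_signature=0.85, shape_match=0.95, in_zone=True))   # True
print(engage(heat_signature=0.85, shape_match=0.95, in_zone=False))  # False
```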

Technological progress

Another way the LAR debate has been framed, with varying degrees of subtlety, is to assume that the technology progresses linearly. This is the view that, with time, robots will only ever become more sophisticated, autonomous and human-like. One example, a diagram in an MoD report on the UK’s approach to drones, graphically represents this type of assumption.


Implicit and explicit assumptions about the direction of the technology are, however, usually conveyed through language. Consider the use of modal verbs in this extract from the MoD’s wildly speculative Global Strategic Trends report:

“As the information revolution continues, there will be a pervasive and dramatic growth in the role of unmanned, autonomous and intelligent systems… Systems will exhibit a range of autonomy levels from fully autonomous to significantly automated and self-coordinating, while still under high-level human command. Systems may have human-like mobility and user interfaces to act as assistants, while other designs may consist of collaborative networks of smart sensors, weapon systems or transportation platforms, treated as smart tools.”

So what?

Metaphor and framing are often necessary when communicating difficult scientific issues. However, framing can also be politically motivated and requires careful monitoring.

In the case of LARs, the hyped, sci-fi presentation of the technology can distract people from the real issues at hand, such as the tawdry, automated nature of these weapon systems. Similarly, the idea of ceaseless technological progress promises fixes to the moral and legal problems raised by LARs: fixes which are assumed rather than certain.
