Will we be able to control the killer robots of tomorrow?

Not without significant international cooperation, no.

From ship-hunting Tomahawk missiles and sub-spying drone ships to semi-autonomous UAV swarms and situationally aware reconnaissance robots, the Pentagon has long sought to protect its human forces by fielding robotic weapons. But as these systems gain ever-greater degrees of intelligence and independence, some critics worry that humans are ceding too much power to devices whose decision-making processes we don't fully understand (and which we may not be entirely able to control).

What constitutes an Autonomous Weapon System (AWS) depends on who you ask, as these systems exhibit varying degrees of independence. Sense and React to Military Objects (SARMO) weapons like the Phalanx and C-RAM can react to incoming artillery and missile threats, targeting and engaging them without human oversight. However, these aren't fully autonomous, per se -- they simply perform a set automated task. They're no more "intelligent" than the assembly-line robots that welded your car's frame together. There is no decision-making, only a response to an external stimulus.

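To make that distinction concrete, here is a minimal Python sketch. The class, function name and thresholds are entirely invented for illustration, not drawn from any real fire-control system: a SARMO-style weapon applies a fixed rule to whatever its sensors report, which is automation rather than decision-making.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A hypothetical radar track of an incoming object."""
    speed_mps: float  # closing speed in meters per second
    range_m: float    # distance to the protected asset in meters
    inbound: bool     # is the object heading toward the protected asset?

def sarmo_intercept(track: Track) -> bool:
    """SARMO-style logic: a fixed, automated reaction to a stimulus.
    Nothing here 'decides' whom to attack -- the rule simply fires
    whenever a sensed object matches preset criteria."""
    return track.inbound and track.speed_mps > 300 and track.range_m < 2000

# An incoming shell matching the preset profile trips the automated response.
print(sarmo_intercept(Track(speed_mps=850.0, range_m=1500.0, inbound=True)))  # True
```
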
Fully autonomous weapons capable of selecting, identifying and engaging targets of their own choosing without human input (think Terminators) have not yet been fielded by any nation, despite what Russia is claiming. However, a number of countries, including China, the UK, Israel and, of course, the US and Russia, are working on their direct precursors. As such, now is the time to devise a regulatory framework, the International Committee for Robot Arms Control (ICRAC) argued before a United Nations "Meeting of Experts" in 2014.

To a degree, both the Pentagon and the UK's Ministry of Defence (MoD) have worked out internal guidelines for AWS development. In 2012, the Pentagon issued Directive 3000.09, which dictates that "autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." Similarly, Lord Astor of Hever told Parliament in 2013, "The MoD currently has no intention of developing systems that operate without human intervention."

Directive 3000.09 further defines an AWS as "a weapon system that, once activated, can select and engage targets without further intervention by a human operator." This differs from a semi-autonomous weapon, which only identifies and presents targets for a human to select. Whether a human operator remains "in the loop" with the ability to override the system's targeting decisions, or there is no human oversight at all ("out of the loop"), the system still counts as an AWS under that definition. So what does the DoD mean by "appropriate levels of human judgment"?

"We've debated with the US about this and they won't say what they mean by 'appropriate control'," Noel Sharkey, professor of AI and robotics at the University of Sheffield and chair of the International Committee for Robot Arms Control, told Engadget. "In the last four years since [the ICRAC] have been campaigning at the UN, it's always 'Everything will be in controlled by humans,' but nobody will say in what way it will be controlled by humans."

Sharkey argues that the "appropriate control" standard leads to a slippery slope of responsibilities, depending on how one defines "control." In Autonomous Weapons Systems: Law, Ethics, Policy, Sharkey lays out a five-point scale of what could constitute decreasing levels of human control:

  1. The human identifies, selects and engages the target, initiating all attacks

  2. The AI suggests target alternatives but the human still initiates the attack

  3. The AI chooses the targets but the human must approve before the attack is initiated

  4. The AI chooses the targets and initiates the attack but the human has veto power

  5. The AI identifies, selects and engages the target without any human oversight

All of these schemes have their relative benefits, depending on the situation and the weapon system being controlled. In fact, the Patriot Missile System operates under engagement rules similar to Point 4. Weapons that would adhere to Point 5 have not yet reached the battlefield, though not for lack of interest. UAVs like the BAE Taranis are being equipped with the ability to locate, identify and engage targets (but only with the OK from mission command), though the Taranis can also autonomously defend itself from incoming fire and enemy aircraft. And although Samsung strenuously denies the allegations, its SGR-A1 defense turret, which is currently deployed along the Korean Demilitarized Zone, is rumored to be capable of identifying and engaging enemy forces completely on its own.

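To see why the definition of "control" matters, the scale can be sketched in code. The following Python is purely illustrative -- the enum values and function name are invented for this example, not drawn from any real weapon system -- but it shows where the line falls: only at Points 4 and 5 can an engagement proceed without an affirmative human decision.

```python
from enum import IntEnum

class ControlLevel(IntEnum):
    """Sharkey's five decreasing levels of human control (illustrative labels)."""
    HUMAN_SELECTS_AND_ENGAGES = 1   # human does everything
    AI_SUGGESTS_TARGETS = 2         # human still initiates the attack
    AI_SELECTS_HUMAN_APPROVES = 3   # human must approve before the attack
    AI_ENGAGES_HUMAN_CAN_VETO = 4   # attack proceeds unless the human vetoes it
    AI_FULLY_AUTONOMOUS = 5         # no human oversight at all

def requires_human_authorization(level: ControlLevel) -> bool:
    """Points 1-3 require an affirmative human decision before any engagement.
    Points 4-5 do not: at Point 4 the human can only veto an attack already
    under way, and at Point 5 the human is out of the loop entirely."""
    return level <= ControlLevel.AI_SELECTS_HUMAN_APPROVES

# A Patriot-style engagement mode (Point 4) would not wait for a human "yes":
print(requires_human_authorization(ControlLevel.AI_ENGAGES_HUMAN_CAN_VETO))  # False
```

Framed this way, the DoD's "appropriate levels of human judgment" amounts to deciding, weapon by weapon and context by context, which of those levels is acceptable.
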
Despite Sharkey's assertion that the Pentagon has been reluctant to define "appropriate control," a Department of Defense spokesman pointed out to Engadget that General Paul J. Selva, the vice chairman of the Joint Chiefs of Staff, stated during recent Senate testimony that he does not think "it's reasonable to put robots in charge of whether we take a human life." The DoD cites its decades-long use of the Aegis system as evidence of its responsible operation of autonomous weapons (or at least those with autonomous features).

"Context and environment matter in determining the appropriate level of human judgment to be exercised in the use of force," the DoD spokesman said. "Like any other weapon, a given autonomous weapon system may be appropriate for use in one operational environment and purpose, but not another."

Given the Pentagon's established stance in favor of maintaining a human in the loop (at least in some form), even having an AWS ready for combat doesn't guarantee that it would be used in future conflicts. The decision to use an AWS would hinge on a number of factors including "trust in the system, training, level of risk associated with the situation," the DoD spokesman said, and would be equally influenced by the operator's "workload, stress and experience."

That said, the Pentagon has not completely ruled out removing human oversight should our adversaries decide to do the same. "The DoD's autonomy community is fully aware that the legal and ethical frameworks developed by the United States differ from both our current and potential adversaries," a DoD spokesperson said. And should, say, China or Russia develop a devastating new AI-controlled AWS, the Pentagon will "have to come up with solutions where responses occur much faster than 'human in the loop' allows."

What's more, Just Security's Paul Scharre argues that world militaries have good reason to maintain human control: the ability to re-target these multi-million-dollar munitions mid-flight should the situation on the ground change while the weapon is in transit.

Sharkey, however, is unimpressed. "Over the years working at the UN, it has become very clear to me that none of that is about killing less civilians," he said. "The goal here is to not waste expensive munitions. So you want to be on target as much as possible... but if it's a high-value target, as they say under the Principle of Proportionality, then it doesn't matter about a few civilians or a hundred civilians if it's bin Laden or whatever." Indeed, the world recently witnessed the destruction wrought by precision munitions when used in sufficient quantities during the liberation (and devastation) of Mosul, Iraq.

The professor is also concerned that these weapons systems could be used for covert target identification and acquisition. "Connected to the cloud in order to work in tandem with other robots, they would be the perfect tools to ID and track large numbers of people from afar and from the air," Sharkey argued in a 2015 Wall Street Journal op-ed. "The threat of future attacks would make these robots hard to put away again." Especially if that technology should fall into the hands of extremist non-state actors.

So if we're opening a Pandora's box of Skynets, as Sharkey suggests, why not simply ban these weapons outright, like we did with land mines and chemical weapons? As the DoD argued above, ban treaties are all well and good, but only if everybody adheres to them.

"The regulatory regimes that are specific to nuclear weapons or to chemical weapons cannot be readily applied to artificial intelligence or to weapons that employ AI or autonomy," the DoD spokesman said. But unlike chemical or nuclear weapons, which require difficult-to-acquire materials to be deadly, autonomous weapons systems can be created using existing and often commercially available components.

So, in the end, it certainly appears that artificially intelligent weapons, like the atom bomb and V-2 rocket before them, will eventually make their way to future battlefields whether we like it or not. Just as with nuclear technology, AI's potential for misuse does not guarantee such misuse will occur. It's up to the world community to come together and decide, once and for all, what we want our collective future to look like. And whether or not we want it run by Terminators.
