Autonomous Weapons Are Coming. This Is How We Get Them Right

December 2, 2018


Fully autonomous weapons are not only inevitable; they have been in America’s inventory since 1979.

In November 2017, the Future of Life Institute released its Slaughterbots video, which featured swarms of miniature autonomous drones killing U.S. senators on Capitol Hill and students on a university campus. Hyperbolic, frightening and depicting drones well beyond today’s capabilities, the video is part of the Institute’s campaign to ban autonomous weapons systems. The Institute, which seeks a ban on all such weapons, represents one end of the debate over these new technologies: it treats autonomous weapons as fundamentally new and different weapons.

In fact, only general public awareness of autonomous weapons systems is new. As a Brookings report notes, autonomous weapons have existed and been used in a variety of settings for decades. Admittedly, technical limitations have kept the application of autonomy narrow, confined to specific operational problems. However, recent advances in task-specific or limited artificial intelligence mean that a very wide range of weapons can now be made autonomous. It is therefore important that we explore ways to exploit autonomy for national defense while addressing the ethical, legal, operational, strategic and political issues it raises. Perhaps the question generating the most discussion is whether autonomous weapons should be allowed to kill a human being.

Human Rights Watch notes that the level of autonomy granted to weapons systems can vary greatly. Its categorization is worth quoting at length:

Robotic weapons, which are unmanned, are often divided into three categories based on the amount of human involvement in their actions:

Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command;

Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions; and

Human-out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction.

The first, human-in-the-loop, provides the greatest level of human supervision of the decision to kill. The human is literally in the decision loop, and thus the weapons system cannot complete the kill cycle until the human takes positive action to authorize it. In a human-in-the-loop system, machines analyze the information and present it to the operator, who must then take time to evaluate it and authorize engagement. While this theoretically provides the tightest control, in practice humans will be too slow to keep up during time-critical engagements. Even if the operator simply accepts the machine’s recommendation, he or she will inevitably slow the response. In an environment of multiple inbound missiles, some traveling faster than the speed of sound, a human cannot process the information fast enough to defend the unit. The Navy recognized this fact thirty years ago when it developed the autonomous mode for the Aegis Combat System, as well as its Close-In Weapon System, to defend the fleet against missile attacks. But technological advances will soon expand the number of engagements that need to be conducted at machine speed.
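The arithmetic behind that claim is easy to sketch. The short Python calculation below uses purely illustrative, assumed values for detection range, missile speed and operator decision time, not data from any real system:

```python
# Back-of-the-envelope sketch of the time-critical engagement problem.
# All numbers are illustrative assumptions, not data from any real system.

detection_range_m = 28_000  # assumed detection range for a sea-skimming missile (~15 nm)
missile_speed_ms = 680      # assumed speed, roughly Mach 2 at sea level
human_decision_s = 10       # assumed time for an operator to evaluate and authorize

time_to_impact_s = detection_range_m / missile_speed_ms  # ~41 seconds in total

# Each inbound missile consumes its own slice of the human's decision time.
for inbound_missiles in (1, 2, 4):
    margin_s = time_to_impact_s - inbound_missiles * human_decision_s
    print(f"{inbound_missiles} missile(s): {margin_s:.0f} s of margin left to act")
```

Even under these generous assumptions, a handful of simultaneous inbound missiles consumes nearly the entire engagement window; a machine deciding in milliseconds does not have that problem.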

In the second approach, human-on-the-loop, a human monitors the autonomous system and intervenes only when he or she determines the system is making a mistake. The advantage is that it allows the system to operate at the machine speed necessary to defend the unit while still attempting to provide human supervision. The obvious problem is the human will simply be too slow to analyze all of the system’s actions in a high-tempo engagement. Thus, the human will often be too late in attempts to intervene.

The third, human-out-of-the-loop, is frankly a bad definition. Until artificial intelligence gains the ability to design, build, program and position weapons, humans will both provide input to and interact with autonomous systems. At a minimum, humans will set the initial conditions that define the weapon’s actions after activation. Even something as simple as a land mine requires human input: a human designs the system to detonate under particular conditions, and other humans select where it will be planted to kill the desired targets. Thus, contrary to Human Rights Watch’s definition, the fact that a weapon has no human in or on the loop does not mean it requires no human input.

Rather than “human-out-of-the-loop,” this third category is really “human-starts-the-loop.” Autonomous systems do not deliver force “without any human input or interaction.” In fact, autonomous weapons require humans to set engagement parameters, in the form of algorithms programmed into the system, before employment. And they will not function until a human activates them or “starts the loop.” Thus, even fully “autonomous” systems include a great deal of human input; it is just provided before the weapon is employed. This is how the systems actually in use today work: smart sea mines, the Patriot missile system, the Aegis Combat System in its Auto-Special or autonomous mode, advanced torpedoes, Harpy drones and Close-In Weapon Systems. In fact, for decades a variety of nations have owned and used systems that operate on the human-starts-the-loop concept.
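To make the concept concrete, here is a minimal, hypothetical sketch of a human-starts-the-loop weapon in Python. Every name, signature and boundary below is invented for illustration and describes no real system; the point is simply that human judgment is encoded in parameters before employment, and the machine then decides at machine speed only within those bounds:

```python
# Minimal, hypothetical sketch of "human-starts-the-loop". Every name,
# area and signature below is invented for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class EngagementParameters:
    """Human judgment is encoded here, before the weapon is employed."""
    target_signatures: frozenset  # signatures the weapon is allowed to engage
    area: tuple                   # (lat_min, lat_max, lon_min, lon_max) patrol box
    armed: bool = False           # inert until a human activates it

def may_engage(p: EngagementParameters, signature: str, lat: float, lon: float) -> bool:
    """The machine decides at machine speed, but only inside human-set bounds."""
    if not p.armed:
        return False
    lat_min, lat_max, lon_min, lon_max = p.area
    return (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
            and signature in p.target_signatures)

# A human "starts the loop": sets the parameters, then activates the weapon.
params = EngagementParameters(
    target_signatures=frozenset({"submarine_class_a"}),
    area=(10.0, 11.0, 120.0, 121.0),
    armed=True,
)
print(may_engage(params, "submarine_class_a", 10.5, 120.4))  # True: inside all bounds
print(may_engage(params, "fishing_vessel", 10.5, 120.4))     # False: wrong signature
```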

Unfortunately, much of the discussion today focuses on the first two systems, human-in-the-loop and human-on-the-loop. We already know that neither really works in time-critical engagements. If we limit the discussion to these two systems, we must either accept the risk that a person will respond too slowly to protect their own forces or accept the risk that the system will get ahead of the human and attack a target it should not. The literature on these two systems, from books to research papers to articles, is deep and rich, but it does not thoroughly examine how humans will deal with the increasing requirement to operate at machine speed.

Rather than trying to manage this new reality with the fundamental flaws of the first two approaches, human-starts-the-loop accepts that in modern, time-critical combat humans simply cannot keep up. Of course, human-starts-the-loop is only necessary for time-critical operations. For operations where speed of decision is not a key element, like today’s drone strikes, the only acceptable system remains human-in-the-loop. In these situations, operators have minutes to hours to decide whether or not to fire, more than sufficient time for a human to make the decision. Thus, human-in-the-loop will continue to be used in situations where human-speed decision-making still works.

That said, since humans are too slow to effectively employ either human-in-the-loop or human-on-the-loop in time-critical engagements, it is more productive to accept reality and focus our research, debates and experimentation on how to thoughtfully implement human-starts-the-loop for time-critical engagements where humans simply can’t keep up.

Fully autonomous weapons are not only inevitable; they have been in America’s inventory since 1979, when it fielded the Captor anti-submarine mine, a bottom-anchored torpedo that launched when onboard sensors confirmed a designated target was in range. Today, the United States holds a significant inventory of smart sea mines in the form of Quickstrike mines, Mk 80-series bombs equipped with a Target Detection Device. It also operates torpedoes that become autonomous when the operator cuts the wire. Modern air-to-air missiles can lock on after launch. At least six nations operate the Israeli-developed Harpy, a fully autonomous drone that is programmed before launch to fly to a specified area and then hunt for specified targets using electromagnetic sensors. The follow-on system, Harop, adds visual and infrared sensors as well as an option for human-in-the-loop control. Given that the Skydio R1 commercial drone ($2,499) uses visual and infrared cameras to autonomously recognize and follow humans while avoiding potential obstacles like trees, buildings and other people, it is prudent to assume a Harop could be programmed to use its visual and infrared sensors to identify targets.

And, of course, victim-initiated mines (the kind you step on or run into), both land and sea, have been around for well over 100 years. These mines are essentially autonomous: they are unattended weapons that kill humans without another human making that decision. But even these primitive weapons are really human-starts-the-loop weapons. A human designs the detonator to require a certain amount of weight to activate the mine. A human selects where to place them based on an estimate of the likelihood they will kill or maim the right humans. But once they are in place, they are fully autonomous. Thus, much like current autonomous weapons, a human sets the initial conditions and then allows the weapon to function automatically. The key difference between a traditional automatic mine and a smart, autonomous mine like the Quickstrike is that the smart mine attempts to discriminate between combatants and non-combatants. Dumb mines don’t care. Thus, smart mines are inherently less likely to hurt non-combatants than older mines.

Fortunately, the discussion today is starting to shift to how “human-starts-the-loop” can minimize both types of risk: responding too slowly to protect one’s own forces and striking a target one shouldn’t. The United States is already expanding its inventory of smart sea mines. They comply with current DoD policy, which states that “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

By adopting human-starts-the-loop, we can both deal with the operational problem of human limitations and ensure autonomous weapons meet legal and ethical standards. The key element in the success of an autonomous system is setting its parameters. These can range from the simple step of setting the trigger weight of an anti-tank mine high enough that only heavy vehicles will set it off to the sophisticated programming of a smart sea mine to select a target based on the acoustic, magnetic and pressure signatures unique to a certain type of target. Today’s task-specific or limited artificial intelligence is already capable of making much finer distinctions and cross-checking a larger number of variables.
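A hypothetical sketch illustrates the two ends of that parameter-setting spectrum. All thresholds and signature bands below are invented for illustration; no real mine data is used:

```python
# Hypothetical sketch of the parameter-setting spectrum. All thresholds
# and signature bands are invented; no real mine data is used.

# Simple end: a dumb anti-tank mine with a single human-set parameter.
TRIGGER_WEIGHT_KG = 150.0  # assumed: set high enough that only heavy vehicles trip it

def mine_detonates(weight_kg: float) -> bool:
    return weight_kg >= TRIGGER_WEIGHT_KG

# Sophisticated end: a smart sea mine matching several sensor signatures at once.
TARGET_SIGNATURE = {                  # assumed bands for one ship type
    "acoustic_hz": (40.0, 60.0),      # propeller tonal band
    "magnetic_ut": (5.0, 50.0),       # hull magnetic anomaly, in microtesla
    "pressure_pa": (200.0, 2_000.0),  # pressure wake amplitude
}

def smart_mine_engages(contact: dict) -> bool:
    """Engage only if every sensor reading falls inside its human-set band."""
    return all(lo <= contact[key] <= hi for key, (lo, hi) in TARGET_SIGNATURE.items())

print(mine_detonates(80.0))  # False: a person or light vehicle passes safely
print(smart_mine_engages(
    {"acoustic_hz": 50.0, "magnetic_ut": 12.0, "pressure_pa": 900.0}))  # True
print(smart_mine_engages(
    {"acoustic_hz": 50.0, "magnetic_ut": 0.2, "pressure_pa": 900.0}))   # False
```

The design point is that discrimination quality is a direct function of the parameters humans choose: the more signatures the weapon must match, the narrower the set of things it will engage.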