Taking the Measure of AI and National Security

Transformative technologies should not be abandoned because of their risks. Instead, we should focus on understanding the threats, implementing the necessary countermeasures, and using these tools more safely and securely.

The rise of artificial intelligence and machine learning gives new meaning to the measure-countermeasure dynamic—that is, the continuous evolution of defensive and offensive capabilities. The development of large language models, in particular, underscores the necessity of understanding and managing this dynamic.

Large language models, like GPT-4, can generate human-like text based on the input they receive. They are trained on vast quantities of data and can produce coherent, contextually relevant responses. These models hold great promise across a multitude of fields, including cybersecurity, healthcare, and finance. But as with any powerful tool, they also pose challenges.

Bill Gates and other technology leaders have warned that AI is an existential risk and that “mitigating the risk of extinction from AI should be a global priority.” At the same time, Gates recently published a blog post titled “The risks of AI are real but manageable.” Part of managing AI will depend on understanding the measure-countermeasure dynamic, which is central to the progression and governance of AI development yet remains one of its least appreciated features.

Policymakers consistently face the same challenge: rapid technological advances and their associated threats outpace the creation of relevant policies and countermeasures. In the field of AI, crises over emergent capabilities, ranging from deepfakes to potential existential risks, arise from the inherent unpredictability of technological development, which is shaped by geopolitical shifts and societal change. As a result, policy frameworks will almost always lag the state of the technology.

The measure-countermeasure dynamic arises from this reality and calls for an approach we term “sequential robustness.” The approach is rooted in the persistence of uncertainty, driven by factors such as rapid technological development and geopolitical shifts. Unlike traditional policy approaches, sequential robustness acknowledges and accepts the transient nature of current circumstances. By adopting this perspective, policymakers can immediately address problems for which policy solutions already exist, examine challenges that lack solutions, and continue to study emerging threats. While pursuing an ideal solution is commendable, policymakers must prioritize actionable steps. Perfection is unattainable, but prompt, informed action is an essential first step.

Indeed, sequential robustness offers a reassuring perspective. Whether or not AI represents an existential risk, this approach reminds us that we are in a cycle of continuous action and reaction. Our focus should be on making the most informed next move. While it is important to consider the long-term implications of our decisions, we do not need to have all the answers right now, which is a good thing, because we certainly do not have them. We do not need to address every AI challenge immediately, just those that are most urgent. As we will discuss, the measure-countermeasure dynamic rarely produces a perfect solution; instead, it yields strategies that delay, deter, mitigate, or reduce harmful outcomes. This continuous cycle of incomplete but adaptive solutions is the essence of the sequential robustness approach.

Sequential robustness has played out dramatically in recent history. The stunning rise of aviation made possible the tragic events of 9/11. In response, Congress established the Transportation Security Administration, which introduced countermeasures such as reinforced cockpit doors and intensive passenger screening. The agency later added the liquids rule, limiting the volume of liquids passengers can bring onboard, in response to the 2006 transatlantic aircraft plot, and it introduced full-body scanners after the attempted “underwear bomber” attack in 2009.

Similarly, following the anthrax attacks in 2001, the United States took decisive measures to bolster its biodefense capabilities and responses. The Department of Homeland Security initiated the BioWatch program, designed to rapidly detect the release of aerosolized biological agents. Recognizing the broad scope of potential biological threats, the United States released “Biodefense for the 21st Century” in 2004, emphasizing a comprehensive approach to bioterrorism threats. This focus was further honed with the emergence of global health threats like SARS and avian influenza, leading to the enactment of the Pandemic and All-Hazards Preparedness Act in 2006, which aimed to ensure the nation’s readiness for a wide range of public health emergencies.

Today, sequential robustness is especially salient given the proliferation of large language models. Malign users could exploit these models to create malicious software or disinformation campaigns. Priority should be given to building ethically driven and robust algorithms while implementing comprehensive policies that deter misuse.

Interestingly, large language models can themselves serve as potent tools for developing robustness. They can be employed to improve code quality and strengthen defenses against cyberattacks. Industry could also use the models to spot suspicious user activity or to automate penetration testing, which simulates attacks to help make computer systems more resilient.

A poignant example of large language models’ potential national security benefit lies in the intelligence failures that preceded the 9/11 attacks. Amid the information overload and disparate data points cataloged in The 9/11 Commission Report, critical dots went unconnected: the Phoenix memo warning about possible terrorists at civil aviation schools, the cautions about Al Qaeda from former CIA officer J. Cofer Black, and the CIA’s “Bin Ladin Determined To Strike in US” President’s Daily Brief of August 6, 2001. The ability of large language models to parse vast amounts of information and uncover connections that humans might overlook could have prevented these catastrophic oversights and enabled timely countermeasures.

To address biological threats, international organizations pursue countermeasures such as strengthening bioweapons conventions, enhancing biodefense capabilities, and monitoring biotechnology research. Here, too, large language models can contribute by identifying malign activities. The models can also advance benign biological causes, such as aiding in the rapid development of vaccines, treatments, and cures.

As in other areas of national security, every countermeasure can lead to new measures from potentially malign users, requiring vigilance and adaptability yet again. Throughout this process, it will be essential that stakeholders at all levels—lawmakers, technologists, researchers, and the public—engage in this dialogue. All have a role in shaping the future of large language models and ensuring a balance between harnessing their benefits and mitigating their risks.

In seeking proactive responses to adversarial actions, red-teaming is an essential strategy. This approach involves simulating potential adversaries’ tactics, which allows defenders to identify vulnerabilities and prepare for potential threats effectively. By integrating red-teaming into the development and assessment of large language models, stakeholders can better anticipate misuse scenarios and formulate suitable countermeasures, thus contributing to a more resilient AI ecosystem.

As the United States progresses through this era of rapid digital advancement, it is crucial to recognize that transformative technologies should not be abandoned because of the risks they pose. Rather, the focus should be on understanding the threats, implementing the necessary countermeasures, and continuing to use these tools in a safer and more secure manner. The dual role of large language models as both measure and countermeasure highlights the complexity of the challenge while also offering insight into how to manage it. Acknowledging the measure-countermeasure dynamic provides a valuable framework for addressing challenges, capitalizing on technological innovation, and enhancing national security.

Embracing the sequential robustness framework ensures that our future is not determined solely by today’s developments and decisions. Instead, it will evolve through a series of choices made over time, each informed by updated data and shifting contexts, mitigating risks and capturing benefits that cannot yet be foreseen.

Christopher Mouton is a senior engineer at the nonprofit, nonpartisan RAND Corporation and a professor at the Pardee RAND Graduate School.

Caleb Lucas is an associate political scientist at RAND.
