Are We Due for an AI Winter?

AI technology is remarkable, but defense policymakers must be prepared for its possible stagnation.

Just over one year ago, I asked a simple question in The National Interest: What if machine learning—the most famous research paradigm in artificial intelligence (AI) today—is more limited than it seems? I expressed concern that the United States Department of Defense (DoD) could be unprepared for a slowdown or stagnation in AI development (an AI “winter”).

In the intervening period, state-of-the-art capabilities across AI subfields, most prominently natural language processing, have advanced dramatically. The rise of generative AI, which powers applications such as OpenAI’s ChatGPT, Anthropic’s Claude, and Stability AI’s Stable Diffusion, has produced an endless stream of media coverage. Generative AI appears to yield limitless applications; its development has spurred a flood of invigorating research and a geopolitical “scramble” of age-defining proportions.

Beyond the commercial sphere, the U.S. government has undertaken significant AI-related transformations of its own making. The CHIPS and Science Act of August 2022 apportioned $280 billion over ten years for research and development in critical technologies, commercialization support, and semiconductor manufacturing on American soil. In October 2022, the U.S. government imposed an extensive array of export controls on Chinese firms, restricting their access to advanced semiconductors, manufacturing equipment, and chip designs. These efforts have since expanded to include limited outbound investment controls. The Chief Digital and Artificial Intelligence Office (CDAO) has matured, recently launching a task force to study uses of generative AI across the DoD.

If one were keeping score, my concerns about an AI winter dulling American innovation might look decidedly unfounded in retrospect. But is this picture accurate? How seriously should the DoD take the threat of an AI slowdown or stagnation? Aside from a few defense analysts, such as Paul Scharre, who acknowledge that AI development could “peter out,” these possibilities have not been given adequate consideration.

Here, I provide a brief tour of AI’s history to explore these possibilities, examine generative AI’s prospects today, and then offer three recommendations for the DoD to mitigate over-reliance on this technology.

Rise, Fall, and Rise Again

AI is an oddball field. It swings back and forth between “winters” and “summers,” periods in which funding collapses and then surges, perhaps owing to an underappreciation of the difficulties researchers face. Indeed, the field’s founding research proposal in 1955 had luminaries like John McCarthy and Marvin Minsky, among others, writing: “We think that a significant advance can be made in one or more of these problems [language use, forming abstractions and concepts, solving problems, and self-improvement] if a carefully selected group of scientists work on it together for a summer.” “Overambitious” does not adequately describe the founding mindset of AI.

Since then, the field has seen cycles defined by dominant research paradigms. The first wave, Symbolic AI, relied on explicit rules hand-coded by human programmers, in the hope that this approach would eventually capture the dynamism of human intelligence. Others, known as Connectionists, hoped that systems modeled on the structure of mammalian brains, in which artificial neural networks learn directly from data without explicit rules, would lead to human-like intelligence. But by the 1970s, funding dropped, and the field entered its first winter.

“Expert systems” triggered an AI summer in the early 1980s by hand-coding discipline-specific knowledge into systems trustworthy enough for commercial deployment. This approach, however, crashed into the limits of its specialized hardware and the comparatively more successful spread of desktop computers, and Symbolic AI dropped from the scene. The computing power that accompanied the rise of desktop computers aided the Connectionists, who used it to refine artificial neural networks; banks soon adopted one such technique, using character recognition to process checks.

Still, the data and computing power demanded by this emerging approach of training deep neural networks led to its stagnation from the 1990s until the early 2010s, when the big data revolution and increased computing power reinvigorated deep learning.

The Past Decade as a Race to the Ceiling

The past ten years in AI are commonly described as a victory streak for deep learning, a subset of machine learning. But the abrupt AI “arms race” among American corporate giants, set off earlier this year by OpenAI’s ChatGPT, means that generative AI hype has “spiraled out of control.”

Generative AI hype obscures uneven progress within the field and the technical roadblocks to robustly deploying AI systems for sensitive operations. A typical debate on this matter pits those who see generative AI systems as “intelligent” in a meaningful sense against those who see them as dead ends on the road to “artificial general intelligence” or “human-level intelligence.” For defense purposes, the debate need not take this form.

Instead, generative AI may be seen as an acceleration in an ongoing race to the ceiling of machine learning’s utility in emulating human cognitive abilities. It is simultaneously successful in narrow domains and a giant step closer to the limits of this approach, gobbling up the remaining fruits in machine learning’s garden.

U.S. Defense Must Stare Down the Commercial Hype

There is no better time for the U.S. defense establishment to confront the hype, especially given that generative AI innovation is driven by the private sector. Advisory firm Gartner, known for its “hype cycle” of emerging technologies, recently placed generative AI at the “peak of inflated expectations,” just before the descent into the “trough of disillusionment,” when it becomes critical for providers to deploy the technology in full awareness of its limitations.

There should be no illusions about the ability of existing generative AI systems and their major corporate owners to overcome the following (inexhaustive) list of obstacles at a speed suitable for private investment: the tendency to hallucinate (inventing facts with an authoritative tone); unpredictable changes in behavior from one model version to the next; business models insufficiently tailored to capture the economic value of the generative AI application in question; incompatibilities between AI implementation methods and some companies’ data infrastructures; vulnerability to adversarial attacks in the absence of robust safeguards; and insufficient consumer interest in AI-integrated search engines.

Generative AI systems do not offer what Chief Digital and Artificial Intelligence Officer Craig Martell would have them offer: “five nines…of correctness. I cannot have a hallucination that says: ‘Oh yeah, put widget A connected to widget B’—and it blows up.”

While not all of these obstacles are relevant to the DoD’s AI adoption efforts, the department’s reliance on the private sector as the primary engine of continuous innovation makes them critical. In addition, while there is sufficient corporate interest in generative AI in the immediate term for the technology to survive, the DoD must nonetheless concern itself with a possible stagnation of the technology’s capabilities or a slowdown in mission-applicable breakthroughs.

Consistent with my previous assessment, an AI winter is not guaranteed to occur within a reasonable timeframe. Some argue that the technology will improve over time, making its flaws invisible to consumers. I agree in principle, but this is a major assumption that neglects two critical facts: (1) Time is not unlimited, and the AI bubble could burst before limitations in generative AI are overcome; (2) Generative AI is already receiving a disproportionate share of AI funding across subfields. Generative AI in America exists in a dynamic commercial environment and will survive, thrive, or die primarily in that context.

What should be done by the DoD to mitigate over-reliance on generative AI?

First, the DoD should signal interest in emerging AI research paradigms, like causal and neurosymbolic AI.

I previously recommended that the DoD send clearer signals about the value of “minority voices in AI pursuing research agendas outside of machine learning.” That recommendation should now be firmer and more explicit: emerging AI paradigms, including causal AI and neurosymbolic AI, both identified by Gartner as occupying the “innovation trigger” stage of the hype cycle, ought to be sought out by high-level, agenda-setting organizations like the CDAO. These paradigms draw inspiration, in part, from the successes of the Symbolic and Connectionist approaches while adding new elements to address their shortcomings.

Furthermore, innovative combinations of techniques across subfields in AI (i.e., natural language processing, strategic reasoning, computer vision, etc.) should be prioritized. It is time to break the misleading association bound up in the acronym “AI/ML.”

Second, link AI adoption efforts to the needs of both AI and future warfare.

The operative assumption in much of the DoD’s recent efforts to adopt, field, and integrate AI technologies is that, whatever the flaws of the systems invested in today, they can be improved over time: that autonomous piloting agents will continue to improve in ways suited to the “collaborative combat aircraft” program, that the uncrewed drones characteristic of the U.S. Navy’s Task Force 59 will make for ever-better eyes and ears at sea, and so on. Directly tied to this assumption are the DoD’s external-facing organizations, such as the Defense Innovation Unit (DIU), that link the public and private sectors; concerns abound about their unstable and insufficient funding, as well as a difficult-to-navigate defense bureaucracy designed for traditional defense companies.