How America Can Reinvent Its Approach to Technology Innovation

While America has been losing its edge in technological innovation, the decline is not inevitable.

In 1954, scientists at Bell Labs in the United States invented the first practical silicon solar cell. By 1978, American firms held over 95 percent of the global solar market. Yet despite that early dominance, their share had fallen to a paltry 6 percent by 2021, while China controlled 70 percent of global production. A similar story has played out with hypersonic missiles: the technology was first developed in America in the 1960s, yet today America has “catching up to do very quickly.” This pattern is so common, in fact, that China now leads in thirty-seven of forty-four major emerging technologies, according to a report by the Australian Strategic Policy Institute.

Although the United States still spends more on research and development (R&D) than any other nation, it is lagging behind in spearheading new technologies. The issue is not a lack of R&D spending but an inability to implement new technologies and maintain a market edge over other nations. In other words, we are still the greatest innovators in the world, but we cannot successfully commercialize our innovations. The major reasons for this are the shift from industrial policy to science policy, industry consolidation, and a lack of financing for small and medium enterprises. To correct course, it is necessary to look at the history of R&D in the United States.

During the 1950s and 1960s, the U.S. federal government, particularly the Department of Defense (DoD), played an active role in fostering innovation by acting as the “first buyer” of many new technologies and encouraging technology-sharing between firms. For example, one of the first markets for integrated circuits was NASA, which in the early 1960s bought nearly the entire output of the young chip industry for the Apollo program. More recently, NASA used a similar method in its Commercial Orbital Transportation Services (COTS) program, which encourages commercial spaceflight by purchasing cargo transportation to the International Space Station (a follow-on program did the same for crew). One notable success has been SpaceX, which won a COTS award in 2006 and used it to develop the Falcon 9 rocket and Dragon capsule, demonstrating that the concept is just as viable today as it was in the 1960s. Additionally, the DoD often facilitated knowledge-sharing between firms and researchers, especially through second-source contracts, which stipulated that any new technology purchased by the DoD had to be produced by at least two firms, creating redundancy in the supply chain.

Meanwhile, the majority of research was performed by large corporate laboratories rather than academia: in the 1960s, DuPont produced more patents than Caltech and MIT combined. This, combined with an already massive industrial base, allowed the United States to retain a technological edge, since the government could rapidly create a new market for a technology and an ecosystem of suppliers would quickly form around it. After the initial creation of the market, long-term commercialization and competitiveness were more or less left to the market. Since the United States had a near monopoly on many high-tech products, such as semiconductors and solar panels, there was little need for government intervention. However, this bred a complacency in the 1970s that was abruptly ended by Japanese competition in the 1980s.

That competition caused the U.S. government to shift predominantly towards “science policy,” wherein academia would provide the bulk of the research, focused primarily on basic science with no immediate application. Essentially, the cost of basic R&D was shifted from companies to the government. Meanwhile, large companies consolidated supply chains, and the implementation of new technology was left to small firms with little guidance from the government. This approach initially worked in certain sectors; America actually regained dominance in semiconductors in the 1990s. However, it failed in the long term. As of 2021, Intel was responsible for 19 percent of global semiconductor R&D spending yet still lost the bleeding edge in chip processes to TSMC and Samsung. The same thing happened in solar panel manufacturing: despite outspending Japan on R&D in every year except one from 1980 to 2001, the United States still lost its market share. The focus on efficiency, in short, worked too well. Consolidated technology supply chains made it difficult for companies to adopt new innovations, since smaller firms could no longer test new process improvements and “move them up the chain.” And the focus on basic research alone meant that rapid commercialization took a backseat, allowing other nations to establish a first-mover advantage and maintain it by iterating on already-commercialized technology.

From these failures, it can be ascertained that if the United States wants to regain its lead, it will need to shift its research policy back towards having the state encourage the commercialization of new technology, while intentionally creating redundancy in supply chains to sustain innovation. However, Washington must go further than disorganized one-time grants or leaving the task to the DoD by default. Instead, commercialization should be as focused and institutionalized as basic research is today under organizations such as the National Science Foundation.

A good model to emulate in this regard is Germany’s Fraunhofer Society. Founded in 1949, the organization bridges the gap between research and industry by connecting academics with companies and venture capitalists (VCs) while funding the scale-up of technologies too risky for VC firms. It does this through bilateral contract research (a company hiring an institute for a specific research task), spin-off companies founded by Fraunhofer staff, licensing technology to companies, transferring personnel to industry, and “innovation clusters,” in which different companies are brought together to establish common standards or otherwise coordinate for mutual benefit. Importantly, 70 percent of the Fraunhofer Society’s funding is generated through industry contracts, IP revenue, and publicly funded research projects, which pushes the organization to be dynamic and entrepreneurial in how it approaches problems. A similar approach would work well in the United States, saving taxpayer dollars and attracting talent from both academia and the VC world.

It is worth noting that the Fraunhofer Society already has a branch in the United States and is regarded as “an indispensable promoter of scientific exchange between the USA and Germany.” Creating a similar institute for the United States is therefore less daunting than one might imagine: the U.S. government can consult with the American branch, recruit its personnel, and draw on its expertise with relative ease. Such a policy would also carry the additional benefit of improving relations between Washington and Berlin.

While the United States has been losing its edge in technological innovation, the decline is not inevitable. By creating an institution to bridge the gap between basic scientific research and commercialization by the private sector, the United States can regain dominance while greatly benefiting the public by bringing more cutting-edge technology to store shelves. A good template for such a system already exists in Germany’s Fraunhofer Society, which already has a presence in the United States, so implementing the model here should be a high priority for America’s science policymakers.

Siddhartha Kazi is an undergraduate student studying Industrial Engineering at Texas A&M University. He has written for The National Interest.

Image: Shutterstock.