Executive Summary
National security leaders identify artificial intelligence as a priority technology for defending the United States. Today’s investments in research and development across the federal government, academia, and industry will help secure America’s global leadership in AI. Investments in defense research in particular will provide the U.S. military with the AI-enabled capabilities needed to deter adversaries from aggression, fight and win the wars of the future, and cooperate effectively with allies. But where exactly is this investment going? And what benefits and risks might result from developing and fielding autonomous and AI-enabled weapons and systems?
Policymakers need information about the Department of Defense’s investments in AI for proper oversight and to ensure these research efforts support broader strategic goals. As the U.S. defense community implements its vision for AI, CSET offers a two-part analysis assessing the scope and implications of U.S. military investments in autonomy and AI.
Drawing on publicly available budgetary data on DOD’s science and technology (S&T) program and an extensive review of strategic and operational literature and scientific research, these studies focus on three interconnected elements that form our analytical framework:
- The technology element addresses DOD research and development efforts in autonomy and AI;
- The military capabilities element speaks to the speed, precision, coordination, reach, persistence, lethality, and endurance enabled by advances in autonomy and AI;
- The strategic effects element analyzes how these technological developments and capability enhancements may affect key strategic issues—specifically, deterrence, military effectiveness, and interoperability with allies.
This report centers on the technology element, while the accompanying report, “U.S. Military Investments in Autonomy and AI: A Strategic Assessment,” covers the military capabilities and strategic effects portions. The following is a summary of our findings regarding current DOD research investment priorities, trends, and gaps with corresponding recommendations.1
Current DOD Research Investments: Trends and Gaps
The U.S. military maintains a wide range of research programs applying autonomy and AI to unmanned vehicles and systems, information processing, decision support, targeting functions, and other areas. Yet gaps remain in research on AI unrelated to autonomy and in investments in basic AI research.
Our results show that estimates of research investments vary depending on definitions and measures. Across the different measurements, however, the data suggests that:
- AI research unrelated to autonomy—and especially autonomy in unmanned systems—receives a relatively small share of the S&T funds directed toward autonomy and AI research.
- More than two-thirds of the funds allocated to AI-related science and technology research are also related to autonomy.
- In contrast, less than a third of the funds allocated to autonomy-related research are also related to AI.
- Investments in basic AI research are also likely smaller than initially estimated.
- Three programs—DARPA’s “Defense Research Sciences Program,” the Air Force’s “University Research Initiatives,” and the Navy’s “Defense Research Sciences” program—account for about 80 percent of the funds allocated to basic AI research. The estimated program-level costs of these initiatives, however, are highly inflated because they include the costs of research projects unrelated to AI.
The ambiguity about the nature and scope of U.S. military investments in autonomy and AI research makes it difficult to ensure oversight. Moreover, the current U.S. military research on AI may not be sufficiently innovative to fuel the scientific breakthroughs needed to ensure long-term advantage. As such, we offer the following policy recommendations:
- DOD should provide greater clarity about overall funding levels for autonomy and AI, overlap between funding allocated to autonomy research and AI research, and funding for AI-related basic research.
- DOD should leverage its relationships with university-affiliated research centers and national labs to map the landscape of non-autonomy-related AI research and its potential military applications, and to identify opportunities for additional investment.
Effective human-machine collaboration is key to harnessing the full promise of AI. But gaps in our understanding of trust in human-machine teams can impede progress.
The U.S. military sees many benefits to pairing humans with intelligent technologies, and our analysis finds that human-machine collaboration is a crosscutting theme across the different autonomy and AI research programs. The following issues therefore merit attention:
- Trust is essential to human-machine collaboration. Yet in our assessment, few autonomy- and AI-related research initiatives reference both trust and human-machine collaboration.
- Only 18 out of the 789 research components related to autonomy and 11 out of the 287 research components related to AI mention the word “trust.”
- Gaps in research on the role of trust in human-machine teams can negate the advantages in speed, coordination, and endurance promised by autonomy and AI. This, in turn, could impede the U.S. military's ability to use AI-enabled systems to deter adversaries from aggression, operate effectively on future battlefields, and ensure interoperability with allies.
To safely and effectively employ machines as trusted partners to human operators, the following steps may be necessary:
- DOD should increase investment in multidisciplinary research on the drivers of trust in human-machine teams, specifically under operational conditions.
- DOD should assess the advantages of making trust a consistent theme across autonomy and AI research programs pertaining to human-machine collaboration.
U.S. Military Investments in Autonomy and AI: A Budgetary Assessment
1. Some of the recommendations in this report were also articulated in a report published jointly by the Bipartisan Policy Center and CSET. See Bipartisan Policy Center and the Center for Security and Emerging Technology, Artificial Intelligence and National Security (Washington, DC: BPC and CSET, June 2020), https://bipartisanpolicy.org/wp-content/uploads/2020/07/BPC-Artificial-Intelligence-and-National-Security_Brief-Final-1.pdf.