Executive Summary
The Department of Defense aims to harness AI to support and protect U.S. servicemembers, safeguard U.S. citizens, defend U.S. allies, and improve the affordability, effectiveness, and speed of U.S. military operations.1 To advance this strategy, the U.S. military is developing plans to use AI in physical systems—in the air, on the ground, underwater, and in space—as well as virtually, in cyber operations and electronic warfare. Policymakers need information about DOD’s investments in AI to ensure these research efforts support broader strategic goals. Where exactly is this investment going? And what benefits and risks might result from developing and fielding autonomous and AI-enabled weapons and systems?
As the U.S. defense community implements its vision for AI, CSET offers a two-part analysis assessing the scope and implications of U.S. military investments in autonomy and AI. Drawing on publicly available budgetary data on DOD’s science and technology program and an extensive review of strategic and operational literature and scientific research, these studies focus on three interconnected elements that form our analytical framework:
- The technology element addresses DOD research and development efforts in autonomy and AI;
- The military capabilities element speaks to the speed, precision, coordination, reach, persistence, lethality, and endurance enabled by advances in autonomy and AI;
- The strategic effects element analyzes how these technological developments and capability enhancements may affect key strategic issues—specifically, deterrence, military effectiveness, and interoperability with allies.
This report presents a strategic assessment encompassing the military capabilities and strategic effects elements, while the accompanying report, “U.S. Military Investments in Autonomy and AI: A Budgetary Assessment,” covers the technology element. The following is a summary of our findings and recommendations for ensuring U.S. military leadership in AI in the short term and the long term.2
AI in the short term
Filling knowledge gaps: Effective human-machine teaming is at the core of applications that facilitate coordination and increase military endurance—in some situations, even mitigating the risks from machine complexity and speed. But gaps in research on trust in human-machine teams can stymie progress.
The U.S. military sees many benefits in pairing humans with intelligent technologies—improving coordination and decision-making, helping reduce the cognitive and physical burden on warfighters, and decreasing exposure to dangerous missions. The basic technologies that support human-machine collaboration already exist. Yet advances in AI could allow intelligent machines to function not only as tools that facilitate human actions but as trusted partners to human operators. Such breakthroughs could provide U.S. forces with a significant technological and operational advantage against opponents without similar capabilities.
Trust is essential for effective human-machine teaming. Yet research on human attitudes toward technology identifies tendencies toward both mistrust and overtrust, each presenting unique risks and challenges for military uses of human-machine teams. Without a contextualized understanding of trust in human-machine teams, the U.S. military may not be able to fully capitalize on the advantages in speed, coordination, and endurance promised by autonomy and AI. This, in turn, could impede the United States’ ability to use AI-enabled systems to deter adversaries from aggression, operate effectively on future battlefields, and ensure interoperability with allies.
To safely and effectively employ machines as trusted partners to human operators, the following steps may be necessary:
- DOD should increase investment in multidisciplinary research on the drivers of trust in human-machine teams—specifically under operational conditions and in different domains—by varying stress conditions and accounting for relevant dispositional, situational, and learned factors.
- U.S.-based researchers should collaborate with defense research communities in allied countries on joint research initiatives that assess how cross-cultural variation in trust in human-machine teams may impact interoperability.
Maximizing advantages: AI applications that enhance military endurance contribute to military effectiveness and readiness, as well as interoperability with allies.
Today’s strategic and operational realities put a premium on both operational readiness—supported by elements such as personnel, equipment, supply/maintenance, and training—and endurance—the ability to withstand hostile actions and adverse environmental conditions long enough to achieve mission objectives.3 There are great opportunities in leveraging existing and relatively safe technologies for logistics and sustainment to streamline personnel management and enhance the functionality and longevity of military equipment. Yet their potential value is often underappreciated:
- AI applications for logistics and sustainment are more than cost-saving measures boosting back-office efficiency; they enable military readiness and effectiveness in combat.
- Gaps in endurance capabilities can impair interoperability in multinational coalitions like NATO and undermine the long-term health of U.S. alliances.
Therefore, we offer the following policy recommendations:
- DOD, with coordination support from the Joint Artificial Intelligence Center (JAIC), should calibrate investment in AI applications for endurance as an enabler of military readiness.
- JAIC should work with the centralized AI service organizations to assess the impact of AI programs related to logistics and sustainment on overall readiness.
- The United States should work closely with allies on AI applications in logistics and sustainment, including joint research and development programs and support for multinational public-private sector partnerships.
Minimizing risks: In the short term, the national security risks of AI have less to do with AI replacing humans and more to do with technologies failing to deliver on technical expectations, and with misuse by warfighters inexperienced with AI.
Many current U.S. military autonomy and AI research and development projects will never reach fruition; others will fail to scale, or will be fielded but rarely used. There are significant technological, organizational, and budgetary barriers to the innovation and adoption of new military technologies. As such, a healthy degree of skepticism and a tolerance for technical failure are needed.
At the same time, there is also the risk of over-eager adoption. Warfighters who do not fully understand the potential failure modes of AI systems could prematurely employ systems that cannot grasp context or make strategically nuanced judgments, possibly leading to inadvertent escalation. For instance, employing increasingly autonomous unmanned surface vehicles for military deception, while technically possible, could be destabilizing in highly militarized areas such as the South China Sea. As a result, we recommend the following:
- Security and technology researchers, particularly those affiliated with or advising DOD, should be more explicit about the uncertain pace of progress in specific areas related to autonomy and AI technologies (e.g., autonomous ground combat vehicles and unmanned undersea vehicles).
- The same researchers should differentiate between two types of risk: short-term risks associated with the use and misuse of technologies already in the pipeline, and long-term risks arising from more advanced technologies that are likely to face development and fielding barriers.
- DOD should coordinate with federally funded research and development centers and university-affiliated research centers to conduct risk assessments and wargames focused explicitly on near-term AI technologies and risks from human-machine interactions.
- JAIC should coordinate with the services and relevant operational commanders to provide thorough training to units operating autonomous and/or AI-enabled systems on the potential failure modes and corresponding risks.
AI in the long term
Robust, resilient, trustworthy, and secure AI systems are key to ensuring long-term military, technological, and strategic advantages.
Numerous science and technology programs focus explicitly on strengthening AI robustness and resilience, fortifying security in the face of deceptive and adversarial attacks, and developing systems that behave reasonably and reliably in operational settings (DARPA’s “AI Next Campaign” is a prominent example). Yet in our assessment, most S&T programs on autonomy and AI do not mention safety and security attributes. Failure to advance reliable, trustworthy, and resilient AI systems could adversely affect deterrence, military effectiveness, and interoperability:
- Deterrence: The unpredictability and opaqueness of current AI technologies amplify risks of unintended escalation due to the speed of autonomous systems, as well as miscalculations and misperceptions surrounding their potential use.
- Military Effectiveness: AI applications that improve situational awareness, decision-making, and targeting processes can enhance precision capabilities and operational effectiveness. But the need for new data input and validation of AI systems deployed in uncertain operational environments raises concerns about unpredictable behavior.
- Interoperability: Safety, security, and privacy concerns could stall progress toward AI-enabled coordination between the United States and its allies and undermine the effectiveness of coalitions like NATO.
Based on these findings, we propose the following policy recommendations:
- DOD research, development, testing, and evaluation programs should emphasize safety and security across all stages of the AI system lifecycle—from initial design to data/model building, verification, validation, deployment, operation, and monitoring.
- DOD should collaborate with private sector leaders on safety research in areas such as automated and autonomous driving systems, while prioritizing robustness and resilience research in areas understudied in the private sector.
- DOD should focus on traceability to provide assurance for ML systems that continue to learn from dynamic inputs in real time.
- The United States should collaborate with allies on common standards for safety and security of AI systems, including AI-enabled safety-critical systems.
- The United States should pursue opportunities for collaboration with China and Russia on AI safety and maintain crisis communications protocols to reduce the risk of escalation.
Endnotes
1. U.S. Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity (Washington, DC: Department of Defense, 2018), https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
2. Some of the recommendations in this report were also articulated in a report published jointly by the Bipartisan Policy Center and CSET. See Bipartisan Policy Center and Center for Security and Emerging Technology, Artificial Intelligence and National Security (Washington, DC: BPC and CSET, June 2020), https://bipartisanpolicy.org/wp-content/uploads/2020/07/BPC-Artificial-Intelligence-and-National-Security_Brief-Final-1.pdf.
3. Congressional Research Service, Defining Readiness: Background and Issues for Congress (Washington, DC: CRS, June 2017), https://fas.org/sgp/crs/natsec/R44867.pdf.