The Department of Defense wants to harness AI-enabled tools and systems to support and protect U.S. servicemembers, defend U.S. allies, and improve the affordability, effectiveness, and speed of U.S. military operations.1 Ultimately, all AI systems that are being developed to complement and augment human intelligence and capabilities will have an element of human-AI interaction.2 The U.S. military’s vision for human-machine teaming, however, entails using intelligent machines not only as tools that facilitate human action but as trusted partners to human operators.
By pairing humans with machines, the U.S. military aims both to mitigate the risks of unchecked machine autonomy and to capitalize on inherent human strengths such as contextualized judgment and creative problem-solving.3 There are, however, open questions about human trust in intelligent technologies in high-risk settings: What drives trust in human-machine teams? What are the risks from breakdowns in trust between humans and machines or, alternatively, from uncritical and excessive trust? And how should AI systems be designed to ensure that humans can rely on them, especially in safety-critical situations?
This issue brief summarizes different perspectives on the role of trust in human-machine teams, analyzes efforts and challenges to building trustworthy AI systems, and assesses trends and gaps in relevant U.S. military research. Trust is a complex and multidimensional concept, but in essence, it refers to the human’s confidence in the reliability of the system’s conclusions and in the system’s ability to accomplish defined tasks and goals. Research on trust in technology cuts across many fields and academic disciplines. But for the defense research community, understanding the nature and effects of trust in human-machine teams is necessary for ensuring that the autonomous and AI-enabled systems the U.S. military develops are used in a safe, secure, effective, and ethical way.
While the outstanding questions regarding trust apply to a broad set of AI technologies, we pay particularly close attention to machine learning systems, which are capable not only of detecting patterns but also of learning and making predictions from data without being explicitly programmed to do so.4 Over the past two decades, advances in ML have vastly expanded the realm of what is possible in human-machine teaming. But the increasing complexity and unique vulnerabilities of ML systems, as well as their ability to learn and adapt to changing environments, also raise new concerns about ensuring appropriate trust in human-machine teams.
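To make that distinction concrete, the short sketch below is a purely illustrative example, not drawn from the brief: the classification scenario, the toy data, and the use of scikit-learn’s LogisticRegression are all assumptions made for illustration. It shows what it means for a system to infer a decision rule from labeled examples rather than having that rule explicitly programmed.

```python
# Illustrative only: a model "learns" a decision rule from labeled examples
# instead of having the rule hand-coded by a developer. The scenario
# (classifying sensor detections) and the toy data are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each row describes a past detection as (signal strength, noise level);
# labels mark whether it was a true contact (1) or a false alarm (0).
X_train = [[0.9, 0.1], [0.8, 0.3], [0.7, 0.2], [0.3, 0.8], [0.2, 0.7], [0.1, 0.9]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # the decision boundary is inferred from the data

# The fitted model can now score new, unseen detections.
print(model.predict([[0.85, 0.15]]))        # predicted label, e.g., [1]
print(model.predict_proba([[0.85, 0.15]]))  # class probabilities behind it
```

The behavior of such a model is statistical and data-dependent rather than hand-specified, which is part of why calibrating human reliance on learning systems is harder than calibrating reliance on conventionally programmed automation.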
Our key takeaways are:
- Human trust in technology is an attitude shaped by a confluence of rational and emotional factors, demographic attributes and personality traits, past experiences, and the situation at hand. Different organizational, political, and social systems and cultures also impact how people interact with technology, including their trust and reliance on intelligent systems.
- That said, trust is a complex, multidimensional concept that can be abstract, subjective, and difficult to measure.
- Much of the research on human-machine trust examines human interactions with automated systems or more traditional expert systems; there is notably less work on trust in autonomous systems and/or AI.
- Defense research has focused less on studying trust in human-machine teams directly and more on technological solutions that “build trust into the system” by enhancing system functions and features like transparency, explainability, auditability, reliability, robustness, and responsiveness.
- Such technological advances are necessary, but not sufficient, for the development and proper calibration of trust in human-machine teams.
- Systems engineering solutions should be complemented by research on human attitudes toward technology, accounting for the differences in people’s perceptions and experiences, as well as the dynamic and changing environments where human-machine teams may be employed.
- To advance the U.S. military vision of using intelligent machines as trusted partners to human operators, future research directions should continue and expand on:
- Research and experimentation under operational conditions,
- Collaborative research with allied countries,
- Research on trust and various aspects of transparency,
- Research on the intersection of explainability and reliability,
- Research on trust and cognitive workloads,
- Research on trust and uncertainty, and
- Research on trust, reliability, and robustness.
Human-machine teaming is, most basically, a relationship. And as with any other relationship, understanding human-machine teaming requires us to pay attention to three sets of factors: those focused on the human, the machine, and the interactions between them, all of which are inherently intertwined, affecting each other and shaping trust. For the defense research community, insights from research on human attitudes toward technology and on the interactions and interdependencies between humans and technology can strengthen and refine systems engineering approaches to building trustworthy AI systems. Ultimately, human-machine teaming is key to realizing the full promise of AI for strengthening U.S. military capabilities and furthering America’s strategic objectives. But the key to effective human-machine teaming is a comprehensive and holistic understanding of trust.
1. U.S. Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity (Washington, DC: Department of Defense, 2018), https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
2. National Security Commission on Artificial Intelligence, Key Considerations for Responsible Development and Fielding of Artificial Intelligence (Washington, DC: July 2020), 30, https://www.nscai.gov/reports.
3. John D. Winkler, Timothy Marler, Marek N. Posard, Raphael S. Cohen, and Meagan L. Smith, “Reflections on the Future of Warfare and Implications for Personnel Policies of the U.S. Department of Defense” (RAND Corporation, 2019), https://www.rand.org/content/dam/rand/pubs/perspectives/PE300/PE324/RAND_PE324.pdf.
4. Andrew Ilachinski, “AI, Robots, and Swarms” (Center for Naval Analyses, January 2017), 49, https://www.cna.org/cna_files/pdf/DRM-2017-U-014796-Final.pdf.