Today’s rapid advances in artificial intelligence and machine learning present a range of challenges and opportunities for the United States. Increasingly, U.S., Chinese, and Russian leaders recognize AI as a strategic technology that could become a critical determinant of future national competitiveness.1 AI/ML may be poised to transform not only our economies and societies, but also the character of conflict.2 The military applications of these technologies have generated particular concerns and exuberant expectations, including predictions that the advent of AI in military affairs could change the very nature of warfare.3 Undeniably, AI has become a focus in military competition among the great powers,4 with the potential to reshape international competition and undermine deterrence.5
In this policy brief, we present and evaluate several measures in AI safety and security that could prove feasible and mutually beneficial for future bilateral and multilateral interactions. These measures are intended to prevent or correct misperceptions, enhance mutual transparency on policies and capabilities, and provide safeguards against inadvertent escalation. By pursuing such initiatives in the near term, the United States can improve its capacity to leverage the benefits of AI while mitigating the risks and managing the shifting terrain of its relations with China and Russia.
American strategy has reoriented toward great power rivalry, recognizing China and Russia as competitors that present a strategic challenge to the United States and its allies and partners worldwide.6 This new direction demands creative thinking and solutions for complex policy issues. Any coherent framework for U.S. strategy must include policies to promote American innovation, protect sensitive technologies, and manage the security and reliability of supply chains. Great power rivalry may entail sharper competition in areas where U.S. values and interests directly conflict with those of Beijing and Moscow, but it equally requires constructive approaches to enhancing American competitiveness and pursuing selective, pragmatic, and carefully calibrated engagement on issues of mutual concern. In this new era, competition in AI technology and applications has emerged as a source of friction and created potential flashpoints. This initial assessment of AI safety and security concerns illustrates one critical component of a long-term, comprehensive, and sustainable approach.
The dynamics of AI research are open and often collaborative, but the emerging discourse around AI has grown increasingly fractured and competitive. The notion that an “AI arms race” is underway could exacerbate the challenges and misrepresent a range of emerging technologies that present complex and uncertain implications for the future of strategic stability.7 Simply put, machine learning involves a set of interrelated techniques that enable military capabilities, but these techniques do not themselves constitute weapons systems.8 At times, military and political leaders have demonstrated more enthusiasm for AI/ML applications than awareness of the full range of risks and security concerns that could arise with the deployment of such nascent, relatively unproven technologies.9 For instance, Russia is reportedly developing the “Poseidon,” a nuclear-armed underwater drone capable of navigating autonomously, and plans to deploy it by 2027.10
The challenges are acute and especially concerning, given the possibility of military powers rushing to deploy AI/ML-enabled systems that are unsafe, untested, or unreliable in an effort to gain a comparative advantage. Chief among the risks are failures, accidents, or unexpected emergent behaviors in AI systems that can exhibit unpredictable outcomes in real-world settings.11 For military organizations, bureaucratic hurdles and the challenges of testing and assurance may slow adoption of these emerging capabilities, but the risks of accidents or adversarial interference cannot be discounted. Human-machine interactions will also create novel vectors of risk in the operation of automated or semi-autonomous systems.12 AI systems remain vulnerable to attacks, from the deliberate corruption of data and cyber exploitation to the manipulation of brittleness or idiosyncrasies in algorithms.13
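To make the notion of algorithmic brittleness concrete, the minimal Python sketch below illustrates the fast gradient sign method (FGSM), one of the adversarial perturbation techniques surveyed in the literature cited in note 13. The toy linear model, its weights, and the epsilon value are illustrative assumptions introduced here for exposition, not drawn from any system discussed in this brief.

```python
import numpy as np

def fgsm_perturb(x, gradient, epsilon=0.05):
    """Fast gradient sign method: nudge every input feature a small
    step in the direction that most increases the model's loss."""
    return x + epsilon * np.sign(gradient)

# Illustrative only: a toy linear "classifier" whose loss gradient
# with respect to the input is simply its weight vector.
weights = np.array([0.8, -1.2, 0.5])
x = np.array([1.0, 2.0, -0.5])       # a benign input
score_before = float(weights @ x)    # model's raw score

x_adv = fgsm_perturb(x, gradient=weights, epsilon=0.05)
score_after = float(weights @ x_adv)

print(f"score before: {score_before:+.3f}, after: {score_after:+.3f}")
# Each feature changes by at most 0.05, yet the score shifts by
# epsilon * ||w||_1; in deep networks, comparably small perturbations
# can be enough to flip a confident prediction.
```

The point of the sketch is not the toy model itself but the asymmetry it exposes: an attacker who can estimate a model’s gradients, even approximately, can craft inputs that look benign to humans while reliably degrading the system’s behavior.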
Rapid progress in dual-use AI research and applications will heighten shared challenges and could worsen relations among great powers. Following their deployment, interactions among AI systems could prove unpredictable in ways that intensify the risks of inadvertent escalation. Beyond the purview of nation-states, the diffusion of these technologies could empower non-state actors, from criminals to terrorist organizations, and present new security threats.14 Most professional militaries are likely to operate in a manner generally consistent with the laws of war,15 including the requirement for Article 36 review of new weapons systems to ensure their compliance with the Geneva Conventions.16 By contrast, non-state actors could be uniquely empowered by the diffusion of emerging technologies, and they are unlikely to adhere to the same principles or parameters.
Given these concerns, there are compelling reasons to promote measures that enhance the safety, surety, and security of AI systems in military affairs. There are also difficult policy trade-offs involved. On the one hand, collaboration in AI safety and security can reduce the risks of accident and strategic miscalculations among great powers. On the other hand, such collaboration may improve the reliability of machine learning techniques and therefore enable strategic competitors to deploy AI/ML-enabled military systems more quickly and effectively. Evaluating the sensitivity of various countries to issues of safety, reliability, and assurance when fielding weapons systems is beyond the scope of this paper, but merits further analytic attention. Nevertheless, any effort to promote collaboration in AI safety and security will need to balance the potential benefits against the range of possible costs.
1. U.S. and Chinese AI plans, policies, and leadership statements have consistently emphasized this point. See, e.g., “State Council Notice on the Issuance of the New Generation AI Development Plan” [国务院关于印发新一代人工智能发展规划的通知], July 20, 2017, http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm; Graham Webster, Rogier Creemers, Paul Triolo, and Elsa Kania, “Full Translation: China’s ‘New Generation Artificial Intelligence Development Plan’ (2017),” New America, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/; White House, “Executive Order on Maintaining American Leadership in Artificial Intelligence,” February 11, 2019, https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/; “Xi Jinping: Promote the Healthy Development of Our Nation’s New Generation of Artificial Intelligence” [习近平:推动我国新一代人工智能健康发展], Xinhua, October 31, 2018, http://www.xinhuanet.com/politics/2018-10/31/c_1123643321.htm. For an English translation, see Elsa Kania and Rogier Creemers, “Xi Jinping Calls for ‘Healthy Development’ of AI (Translation),” New America, November 5, 2018, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/xi-jinping-calls-for-healthy-development-of-ai-translation/.
2. Klaus Schwab, The Fourth Industrial Revolution (Geneva: World Economic Forum, 2016).
3. Some scholars and policymakers, including former U.S. Secretary of Defense James Mattis, have posited that AI could alter not only the character but even the nature of warfare, while other observers remain more skeptical that AI will prove that transformative. See “Artificial intelligence poses questions for nature of war: Mattis,” Phys.org, February 18, 2018, https://phys.org/news/2018-02-artificial-intelligence-poses-nature-war.html. For an academic perspective on the topic, see Frank G. Hoffman, “Will War’s Nature Change in the Seventh Military Revolution?” Parameters 47, no. 4 (2017): 19-31.
4. Michael C. Horowitz, Gregory Allen, Elsa Kania, and Paul Scharre, “Strategic Competition in an Era of Artificial Intelligence,” Center for a New American Security, August 2018.
5. Michael C. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review, May 2018.
6. See, e.g., “National Security Strategy of the United States of America,” December 2017, https://www.whitehouse.gov/wp-content/uploads/2017/12/NSS-Final-12-18-2017-0905.pdf.
7. See, e.g., Todd S. Sechser, Neil Narang, and Caitlin Talmadge, “Emerging technologies and strategic stability in peacetime, crisis, and war,” Journal of Strategic Studies 42, no. 6 (2019): 727-735. There has been a range of articles written either describing or debunking the notion that there is an “AI arms race” underway. See, e.g., Edward Moore Geist, “It’s already too late to stop the AI arms race—We must manage it instead,” Bulletin of the Atomic Scientists 72, no. 5 (2016): 318-321; Elsa B. Kania, “The Pursuit of AI Is More Than an Arms Race,” Defense One, April 18, 2018, https://www.defenseone.com/ideas/2018/04/pursuit-ai-more-arms-race/147579/; Remco Zwetsloot, Helen Toner, and Jeffrey Ding, “Beyond the AI Arms Race: America, China, and the Dangers of Zero-Sum Thinking,” Foreign Affairs, November 16, 2018, https://www.foreignaffairs.com/reviews/review-essay/2018-11-16/beyond-ai-arms-race; Heather Roff, “The frame problem: The AI ‘arms race’ isn’t one,” Bulletin of the Atomic Scientists, April 29, 2019, https://thebulletin.org/2019/04/the-frame-problem-the-ai-arms-race-isnt-one/.
8. Michael C. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review, May 2018.
9. “Drones, robots, lasers, supersonic gliders & other high-tech arms: Putin wants Russian military to be up to any future challenge,” Russia Today, November 22, 2019, https://www.rt.com/russia/474119-putin-laser-drone-robot-hypersonic/.
10. Amanda Macias, “Russia’s nuclear-armed underwater drone may be ready for war in eight years,” CNBC, March 25, 2019, https://www.cnbc.com/2019/03/25/russias-nuclear-armed-underwater-drone-may-be-ready-for-war-in-2027.html.
11. Paul Scharre, “Killer Apps: The Real Dangers of an AI Arms Race,” Foreign Affairs 98 (2019): 135.
12. John Hawley, “Patriot Wars,” Center for a New American Security, January 2017.
13. See, e.g., Ling Huang, Anthony D. Joseph, Blaine Nelson, Benjamin I.P. Rubinstein, and J. Doug Tygar, “Adversarial machine learning,” in Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, ACM (2011): 43-58; Wieland Brendel, Jonas Rauber, and Matthias Bethge, “Decision-based adversarial attacks: Reliable attacks against black-box machine learning models,” arXiv preprint arXiv:1712.04248 (2017); Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li, “Boosting adversarial attacks with momentum,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018): 9185-9193.
14. Audrey K. Cronin, Power to the People: How Open Technological Innovation is Arming Tomorrow’s Terrorists (Oxford: Oxford University Press, 2019).
15. For one example of Chinese military writings on the topic available in English, see Jian Zhou, Fundamentals of Military Law: A Chinese Perspective (New York: Springer, 2019).
16. See “Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts” (Protocol I), 8 June 1977, https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/WebART/470-750045. For an analysis of the challenges of applying this process to weapons systems with increasing autonomy, see Vincent Boulanin, Implementing Article 36 Weapon Reviews in the Light of Increasing Autonomy in Weapon Systems (Stockholm: SIPRI, 2015).