Wyatt Hoffman was an Emerging Technology Policy Fellow at the U.S. Department of State, where he was sponsored by the Center for Security and Emerging Technology's (CSET) State Department Fellowship. He previously worked as a Research Fellow on CSET's CyberAI Project. Prior to that, he was a senior research analyst with the Cyber Policy Initiative at the Carnegie Endowment for International Peace, where his work focused on cyber strategy, the role of the private sector in cybersecurity, and the intersection of nuclear weapons and cybersecurity. Wyatt holds an M.A. in War Studies from King's College London, where he was a Rotary Global Grant Scholar in Peace and Conflict Prevention and Resolution. He earned a B.A. in political science from Truman State University.
Related Content
Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood of, and containing the consequences of, AI failures.
As with traditional software, vulnerabilities in machine learning systems can lead to sabotage or information leakage. And as with traditional software, sharing information about vulnerabilities helps defenders protect their systems and helps attackers exploit them. This brief examines some of the key differences between vulnerabilities in traditional and machine learning systems and how those differences can affect the vulnerability disclosure and remediation processes.
Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems requires trade-offs depending on the types of threats defenders face, they are caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.
As states turn to AI to gain an edge in cyber competition, it will change the cat-and-mouse game between cyber attackers and defenders. Embracing machine learning systems for cyber defense could drive more aggressive and destabilizing engagements between states. Wyatt Hoffman writes that cyber competition already has the ingredients needed for escalation to real-world violence, even if these ingredients have yet to come together in the right conditions.