CyberAI

As artificial intelligence introduces new risks, some potentially catastrophic or even existential, there is little data or detailed theory to assess them. Policymakers often resort to experts' best guesses for the probability of doom, but probability is not always the most appropriate tool, especially for the types of uncertainty involved in AI risk. This report provides a brief introduction to belief and plausibility, which offer an alternative approach that is mathematically rigorous, uses familiar vocabulary, and requires policymakers to ask only two simple questions.

Washington, D.C. (April 30, 2026) — This morning, Andrew Lohn, Senior Fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), testified before the U.S.-China Economic and Security Review Commission.

Washington, D.C. (April 22, 2026) — This morning, Helen Toner, Interim Executive Director at Georgetown University’s Center for Security and Emerging Technology (CSET), testified before the U.S. Senate Committee on the Judiciary. The hearing, “Stealth Stealing: China’s Ongoing Theft of U.S. Innovation,” examined policy solutions to maintain U.S. technological leadership and strengthen U.S. intellectual property (IP) protections.

Full-Spectrum Propaganda in the Social Media Era

Josh A. Goldstein and Renée DiResta
| April 22, 2026

In a new Security Studies article, Renée DiResta and Josh A. Goldstein lay out how state-backed propagandists run "full-spectrum" propaganda campaigns, relying on overt and covert tools across broadcast and social media.

Organizations face growing pressure to adopt artificial intelligence, but often lack practical guidance on how to do so effectively. This report bridges the gap between high-level principles and real-world implementation, offering actionable steps across the AI adoption life cycle. Drawing on over 1,200 resources, this reference guide provides practitioners with the knowledge required to operationalize AI safety, security, and governance practices within their organizations.

A CSET workshop report was highlighted in a segment published by Axios in its Axios+ newsletter. The segment explores the growing push toward automating AI research and development, examining how far AI systems might go in designing, improving, and training other AI models, and what that could mean for innovation, safety, and governance.

When AI Builds AI

Helen Toner, Kendrea Beers, Steve Newman, Saif M. Khan, Colin Shea-Blymyer, Evelyn Yee, Ashwin Acharya, Kathleen Fisher, Keller Scholl, Peter Wildeford, Ryan Greenblatt, Samuel Albanie, Stephanie Ballard, and Thomas Larsen
| January 2026

Leading artificial intelligence companies have started to use their own systems to accelerate research and development, with each generation of AI systems contributing to building the next generation. This report distills points of consensus and disagreement from our July 2025 expert workshop on how far the automation of AI R&D could go, laying bare crucial underlying assumptions and identifying what new evidence could shed light on the trajectory going forward.

CSET’s Andrew Lohn shared his expert perspective in an op-ed published by The National Interest. In the piece, he explains that AI-assisted hacking signals a deeper cybersecurity threat: not new tools, but the breakdown of core defensive principles, such as defense in depth, in the face of adaptive, large-scale attackers.

CSET’s Kyle Miller shared his expert analysis in an article published by WIRED. The article discusses how OpenAI’s new open-weight models are drawing significant interest from the U.S. military and defense contractors, who see potential for secure, offline, and customizable AI systems capable of supporting sensitive defense operations.

The Geopolitics of AGI | Helen Toner

80,000 Hours
| November 5, 2025

CSET’s Helen Toner was featured on the 80,000 Hours Podcast, where she discussed AI, national security, and geopolitics. Topics include China’s AI ambitions, military use of AI, global AI adoption, and recent tech leadership changes.