As artificial intelligence introduces new risks, some potentially catastrophic or even existential, there is little data or detailed theory to assess them. Policymakers often resort to expert best guesses for the probability of doom, but probability is not always the most appropriate tool, especially for the types of uncertainty involved in AI risk. This report provides a brief introduction to belief and plausibility, an alternative approach that is mathematically rigorous, uses familiar vocabulary, and only requires policymakers to ask two simple questions.
Washington, D.C. (April 30, 2026) — This morning, Andrew Lohn, Senior Fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), testified before the U.S.-China Economic and Security Review Commission.
Washington, D.C. (April 22, 2026) — This morning, Helen Toner, Interim Executive Director at Georgetown University’s Center for Security and Emerging Technology (CSET), testified before the U.S. Senate Committee on the Judiciary. The hearing, “Stealth Stealing: China’s Ongoing Theft of U.S. Innovation,” examined policy solutions to maintain U.S. technological leadership and strengthen U.S. intellectual property (IP) protections.
In a new Security Studies article, Renee DiResta and Josh A. Goldstein lay out how state-backed propagandists run “full-spectrum” propaganda campaigns, relying on overt and covert tools across broadcast and social media.
Organizations face growing pressure to adopt artificial intelligence, but often lack practical guidance on how to do so effectively. This report bridges the gap between high-level principles and real-world implementation, offering actionable steps across the AI adoption life cycle. Drawing on over 1,200 resources, this reference guide provides practitioners with the knowledge required to operationalize AI safety, security, and governance practices within their organizations.
A CSET workshop report was highlighted in a segment published by Axios in its Axios+ newsletter. The segment explores the growing push toward automating AI research and development, examining how far AI systems might go in designing, improving, and training other AI models, and what that could mean for innovation, safety, and governance.
Helen Toner, Kendrea Beers, Steve Newman, Saif M. Khan, Colin Shea-Blymyer, Evelyn Yee, Ashwin Acharya, Kathleen Fisher, Keller Scholl, Peter Wildeford, Ryan Greenblatt, Samuel Albanie, Stephanie Ballard, and Thomas Larsen | January 2026
Leading artificial intelligence companies have started to use their own systems to accelerate research and development, with each generation of AI systems contributing to building the next generation. This report distills points of consensus and disagreement from our July 2025 expert workshop on how far the automation of AI R&D could go, laying bare crucial underlying assumptions and identifying what new evidence could shed light on the trajectory going forward.
CSET’s Andrew Lohn shared his expert perspective in an op-ed published by The National Interest. In the piece, he explains that AI-assisted hacking signals a deeper cybersecurity threat: not new tools, but the breakdown of core defenses like defense in depth against adaptive, large-scale attackers.
CSET’s Kyle Miller shared his expert analysis in an article published by WIRED. The article discusses how OpenAI’s new open-weight models are drawing significant interest from the U.S. military and defense contractors, who see potential for secure, offline, and customizable AI systems capable of supporting sensitive defense operations.
CSET’s Helen Toner was featured on the 80,000 Hours Podcast, where she discusses AI, national security, and geopolitics. Topics include China’s AI ambitions, military use of AI, global AI adoption, and recent tech leadership changes.