News

In the news section, our experts take center stage in discussions on technology and policy. Discover articles featuring insights from our experts or citing our research, and see how CSET's work shapes key conversations in the evolving landscape of emerging technology and policy.

Featured

1 big thing: AI could soon improve on its own

Axios
| January 27, 2026

A CSET workshop report was highlighted in a segment published by Axios in its Axios+ newsletter. The segment explores the growing push toward automating AI research and development, examining how far AI systems might go in designing, improving, and training other AI models and what that could mean for innovation, safety, and governance.


CSET’s Sam Bresnick shared his expert perspective in an article published by The Wall Street Journal. The article examines China’s military use of AI to develop autonomous drone and robot swarms, drawing inspiration from animal behavior to improve offensive and defensive capabilities.

CSET’s Andrew Lohn shared his expert perspective in an op-ed published by The National Interest. In the piece, he explains that AI-assisted hacking signals a deeper cybersecurity threat: not new tools, but the breakdown of core defenses like defense in depth against adaptive, large-scale attackers.

CSET’s Kyle Miller shared his expert analysis in an article published by WIRED. The article discusses how OpenAI’s new open-weight models are drawing significant interest from the U.S. military and defense contractors, who see potential for secure, offline, and customizable AI systems capable of supporting sensitive defense operations.

Time to Accept Risk in Defense Acquisitions

Council on Foreign Relations
| November 10, 2025

Lauren A. Kahn co-authored an analysis published by the Council on Foreign Relations alongside Erin D. Dumbacher and Michael C. Horowitz. The piece examines proposed reforms to the Pentagon’s acquisition system, which aim to speed the delivery of military capabilities and strengthen the U.S. defense enterprise in the face of emerging global challenges.

The AI Cold War That Will Redefine Everything

The Wall Street Journal
| November 10, 2025

CSET’s Helen Toner shared her expert analysis in an article published by The Wall Street Journal. The article discusses China’s accelerated push to compete with the U.S. in generative artificial intelligence.

China’s Stranded Astronauts Show the Dangers of Space Junk

Scientific American
| November 7, 2025

CSET’s Lauren Kahn shared her expert analysis in an article published by Scientific American. The article discusses the growing dangers of space debris and how increasing orbital traffic threatens satellites, space stations, and human space missions.


The Geopolitics of AGI | Helen Toner

80,000 Hours
| November 5, 2025

CSET’s Helen Toner was featured on the 80,000 Hours Podcast, where she discusses AI, national security, and geopolitics. Topics include China’s AI ambitions, military use of AI, global AI adoption, and recent tech leadership changes.

Mapping the AI Governance Landscape

MIT AI Risk Repository
| October 15, 2025

The number of AI-related governance documents is rapidly proliferating, but what risks, mitigations, and other concepts do these documents actually cover?

MIT AI Risk Initiative researchers Simon Mylius, Peter Slattery, Yan Zhu, Alexander Saeri, Jess Graham, Michael Noetel, and Neil Thompson teamed up with CSET’s Mina Narayanan and Adrian Thinnyun to pilot an approach to map over 950 AI governance documents to several extensible taxonomies. These taxonomies cover AI risks and actors, industry sectors targeted, and other AI-related concepts, complementing AGORA’s thematic taxonomy of risk factors, harms, governance strategies, incentives for compliance, and application areas.