News

In the news section, our experts take center stage in discussions on technology and policy. Discover articles featuring insights from our experts or citing our research, which continues to shape key conversations in the evolving landscape of emerging technology and policy.

Featured

1 big thing: AI could soon improve on its own

Axios
| January 27, 2026

A CSET workshop report was highlighted in a segment published by Axios in its Axios+ newsletter. The segment explores the growing push toward automating AI research and development, examining how far AI systems might go in designing, improving, and training other AI models and what that could mean for innovation, safety, and governance.

CSET’s Cole McFaul shared his expert analysis in an article published by the South China Morning Post. The article examines how China’s military is systematically incorporating artificial intelligence into its operations by leveraging civilian universities and private companies under its sweeping "military-civil fusion" strategy.

A CSET report was highlighted in an article published by Defense One. The article discusses China’s growing reliance on smaller, dual-use AI companies to support the People’s Liberation Army, often in ways that obscure foreign collaboration and circumvent U.S. sanctions.

China Is Using the Private Sector to Advance Military AI

The Wall Street Journal
| September 3, 2025

CSET’s Cole McFaul and Sam Bresnick shared their expert analysis in an article published by The Wall Street Journal. The article examines how China’s military is systematically incorporating artificial intelligence into its operations by drawing on civilian universities and private companies as part of its "civil-military fusion" strategy.

CSET's Kathleen Curlee was featured in a short-form documentary published by CNBC. The documentary traces SpaceX’s rise from a struggling startup to a $400 billion company that now dominates the global space industry, while raising questions about the national security implications of U.S. dependence on a single private actor.

CSET’s Jessica Ji shared her expert analysis in an interview published by Science News. The interview discusses the U.S. government’s new action plan to integrate artificial intelligence into federal operations and highlights the significant privacy, cybersecurity, and civil liberties risks of using AI tools on consolidated sensitive data, such as health, financial, and personal records.

CSET Research Analyst Mina Narayanan shared her expert insights in an article published by Defense One. The piece examines President Trump’s newly released AI Action Plan, which outlines a sweeping effort to secure American dominance in artificial intelligence by accelerating military adoption, fast-tracking infrastructure, and expanding U.S. influence in global AI governance.

CSET’s Lauren Kahn shared her expert insights in an article published by DefenseScoop. The article discusses the Trump administration’s new executive order on “Unleashing American Drone Dominance,” which aims to accelerate domestic drone development and adoption within the Department of Defense.

CSET’s Lauren A. Kahn co-authored an op-ed published by Foreign Affairs alongside Michael C. Horowitz and Joshua A. Schwartz. The piece explores how recent drone operations by Ukraine and Israel signal a turning point in modern warfare, demonstrating the growing power of low-cost, AI-enabled systems against traditional military platforms.

China unveils mosquito-sized drone

The Telegraph
| June 24, 2025

CSET’s Sam Bresnick shared his expert insights in an article published by The Telegraph. The article discusses China’s unveiling of a mosquito-sized drone developed by scientists in Hunan province, highlighting its potential for intelligence gathering, surveillance, and special missions in places that larger drones struggle to access.

CSET’s Helen Toner shared her expert insights in an article published by HuffPost. The article discusses concerning findings from recent tests showing that advanced AI models, including OpenAI’s o3 and Anthropic’s Claude Opus 4, can exhibit deceptive, self-preserving behaviors when faced with shutdown or replacement.