News

In the news section, our experts take center stage in shaping discussions on technology and policy. Discover articles featuring insights from our experts or citing our research within the evolving landscape of emerging technology and policy.

Featured

1 big thing: AI could soon improve on its own

Axios
| January 27, 2026

A CSET workshop report was highlighted in a segment published by Axios in its Axios+ newsletter. The segment explores the growing push toward automating AI research and development, examining how far AI systems might go in designing, improving, and training other AI models and what that could mean for innovation, safety, and governance.


The Complicated Politics of Trump’s New AI Executive Order

The National Interest
| January 29, 2026

CSET’s Vikram Venkatram, Mina Narayanan, and Jessica Ji shared their expert analysis in an op-ed published by The National Interest. The article analyzes the Trump administration’s new AI executive order and its attempt to limit state-level AI regulation, examining the legal, political, and governance challenges this approach creates.

CSET’s Helen Toner shared her expert perspective in an article published by TIME. The article examines common misconceptions about artificial intelligence in 2025, including claims that AI progress is stalling, that self-driving cars are inherently more dangerous than human drivers, and that AI systems cannot produce genuinely new knowledge.

The Diplomat interviewed CSET's William C. Hannas and Huey-Meei Chang on myths and misinformation that have persisted in the policy ecosystem around China's development of AI.

CSET’s Kathleen Curlee shared her expert analysis in an article published by TIME. The article discusses the growing hazards posed by space debris, highlighting a recent incident in which China’s Shenzhou-20 spacecraft was struck by orbital debris, delaying the return of its crew from the Tiangong Space Station.

China’s Stranded Astronauts Show the Dangers of Space Junk

Scientific American
| November 7, 2025

CSET’s Lauren Kahn shared her expert analysis in an article published by Scientific American. The article discusses the growing dangers of space debris and how increasing orbital traffic threatens satellites, space stations, and human space missions.

On July 31, 2025, the Trump administration released “Winning the Race: America’s AI Action Plan.” CSET has broken down the Action Plan, focusing on specific government deliverables. Our Provision and Timeline tracker breaks down which agencies are responsible for implementing recommendations and the types of actions they should take.

CSET’s Jessica Ji shared her expert analysis in an article published by Fortune. The article discusses the broader trends in AI companionship and adult-oriented content, the growing demand for emotionally engaging interactions with AI, and the challenges companies face in balancing user freedom with safety and age verification.

Mapping the AI Governance Landscape

MIT AI Risk Repository
| October 15, 2025

The number of AI-related governance documents is rapidly proliferating, but what risks, mitigations, and other concepts do these documents actually cover?

MIT AI Risk Initiative researchers Simon Mylius, Peter Slattery, Yan Zhu, Alexander Saeri, Jess Graham, Michael Noetel, and Neil Thompson teamed up with CSET’s Mina Narayanan and Adrian Thinnyun to pilot an approach to map over 950 AI governance documents to several extensible taxonomies. These taxonomies cover AI risks and actors, industry sectors targeted, and other AI-related concepts, complementing AGORA’s thematic taxonomy of risk factors, harms, governance strategies, incentives for compliance, and application areas.

CSET’s Sam Bresnick shared his expert analysis in an op-ed published by Nikkei Asia. In his piece, he explores the evolving role of U.S. technology companies in international security, particularly in times of conflict, and examines the contrast between their decisive support for Ukraine during Russia’s 2022 invasion and the uncertainty surrounding their potential response in a Taiwan-China crisis.

Why Donald Trump’s AI Strategy Needs More Safeguards

The National Interest
| July 24, 2025

Adrian Thinnyun and Zachary Arnold shared their expert analysis in an op-ed published by The National Interest. In their piece, they examine how the United States must adopt a learning-focused, industry-led self-regulatory framework for AI, drawing lessons from the nuclear sector’s post-Three Mile Island Institute for Nuclear Power Operations to prevent a public backlash and ensure safe, widespread deployment of transformative AI technologies.