News

In the news section, our experts take center stage in shaping discussions on technology and policy. Discover articles featuring insights from our experts or citing our research within the evolving landscape of emerging technology and policy.

Featured

1 big thing: AI could soon improve on its own

Axios
| January 27, 2026

A CSET workshop report was highlighted in a segment published by Axios in its Axios+ newsletter. The segment explores the growing push toward automating AI research and development, examining how far AI systems might go in designing, improving, and training other AI models, and what that could mean for innovation, safety, and governance.


CSET’s Hanna Dohmen shared her expert analysis in an article published by CNBC. The article discusses how China continues to advance in artificial intelligence despite U.S. restrictions on access to Nvidia’s most advanced chips.

On July 31, 2025, the Trump administration released “Winning the Race: America’s AI Action Plan.” CSET has broken down the Action Plan, focusing on specific government deliverables. Our Provision and Timeline tracker breaks down which agencies are responsible for implementing recommendations and the types of actions they should take.

CSET’s Luke Koslosky shared his expert analysis in an article published by Newsweek. The article discusses Amazon’s recent announcement of 14,000 layoffs as the company seeks to streamline operations and prepare for workforce changes driven by emerging technologies like AI.

Mapping the AI Governance Landscape

MIT AI Risk Repository
| October 15, 2025

The number of AI-related governance documents is rapidly proliferating, but what risks, mitigations, and other concepts do these documents actually cover?

MIT AI Risk Initiative researchers Simon Mylius, Peter Slattery, Yan Zhu, Alexander Saeri, Jess Graham, Michael Noetel, and Neil Thompson teamed up with CSET’s Mina Narayanan and Adrian Thinnyun to pilot an approach to map over 950 AI governance documents to several extensible taxonomies. These taxonomies cover AI risks and actors, industry sectors targeted, and other AI-related concepts, complementing AGORA’s thematic taxonomy of risk factors, harms, governance strategies, incentives for compliance, and application areas.

How Trump’s new H-1B fee will hurt Silicon Valley and AI startups

Bulletin of the Atomic Scientists
| October 3, 2025

CSET’s Luke Koslosky shared his expert perspective in an article published by the Bulletin of the Atomic Scientists. The article examines proposed changes to the H-1B visa program and how a $100,000 application fee and wage-based selection system could undermine U.S. leadership in artificial intelligence by restricting access to foreign talent.

CSET’s Luke Koslosky shared his expert analysis in an article published by The Hill. The article discusses President Trump’s decision to raise the H-1B visa application fee to $100,000, highlighting the potential impact on the U.S. tech industry and its ability to attract skilled foreign workers.

CSET’s Jacob Feldgoise shared his expert analysis in a segment published by NPR’s All Things Considered. The segment discusses the U.S. government’s 10% stake in Intel, framing the move as part of broader efforts to reduce reliance on foreign chipmakers and secure U.S. leadership in advanced semiconductor manufacturing.

CSET’s Jacob Feldgoise shared his expert analysis in an article published by the BBC. The article discusses the U.S. government’s 10% stake in Intel, highlighting the move as part of broader efforts to strengthen domestic semiconductor production and maintain U.S. technological leadership.

CSET’s Jacob Feldgoise shared his expert analysis in an article published by Bloomberg. The article discusses a controversial revenue-sharing deal in which Nvidia and AMD agreed to pay 15% of their Chinese AI chip sales to the U.S. government, highlighting how the Trump administration has softened export controls in exchange for financial concessions.

Why Donald Trump’s AI Strategy Needs More Safeguards

The National Interest
| July 24, 2025

Adrian Thinnyun and Zachary Arnold shared their expert analysis in an op-ed published by The National Interest. In their piece, they argue that the United States should adopt a learning-focused, industry-led self-regulatory framework for AI, drawing lessons from the nuclear sector’s post-Three Mile Island Institute of Nuclear Power Operations, to prevent a public backlash and ensure safe, widespread deployment of transformative AI technologies.