Category Archive: Uncategorized

On July 31, 2025, the Trump administration released “Winning the Race: America’s AI Action Plan.” CSET has analyzed the Action Plan, focusing on specific government deliverables. Our Provision and Timeline tracker identifies which agencies are responsible for implementing each recommendation and the types of actions they should take. Read More

CSET’s Helen Toner was featured on the 80,000 Hours Podcast, where she discussed AI, national security, and geopolitics. Topics included China’s AI ambitions, military use of AI, global AI adoption, and recent tech leadership changes. Read More

This blog examines 18 AI-related laws that California enacted in 2024, eight of which are explored in more detail in an accompanying CSET Emerging Technology Observatory (ETO) blog. It also chronicles California’s history of regulating AI and other emerging technologies and highlights several AI bills that moved through the California legislature in 2025. Read More

CSET’s Luke Koslosky shared his expert analysis in an article published by Newsweek. The article discusses Amazon’s recent announcement of 14,000 layoffs as the company seeks to streamline operations and prepare for workforce changes driven by emerging technologies like AI. Read More

CSET’s Kathleen Curlee shared her expert analysis in an article published by Business Insider. The article examines China’s rapid growth in space-based military capabilities and the growing competition with the United States in orbit. It highlights how these advances could affect a potential conflict over Taiwan, where China could target U.S. satellites that provide critical functions, including surveillance, communications, navigation, and coordination. Read More

Red-teaming is a popular evaluation methodology for AI systems, but it still lacks theoretical grounding and technical best practices. This blog introduces the concept of threat modeling for AI red-teaming and explores how software tools can support or hinder red teams. To conduct effective evaluations, red-team designers should ensure their tools fit both their threat model and their testers. Read More

CSET’s Ali Crawford shared her expert perspective in an op-ed published by The National Interest. In her piece, she explores how science fiction, from Black Mirror and Her to Cyberpunk 2077, reveals that the true danger of artificial intelligence is not killer robots but the corporate systems quietly reshaping human behavior and autonomy. Read More

CSET’s Jessica Ji shared her expert analysis in an article published by Fortune. The article discusses the broader trends in AI companionship and adult-oriented content, the growing demand for emotionally engaging interactions with AI, and the challenges companies face in balancing user freedom with safety and age verification. Read More

China and the U.S. are in a close race for AI supremacy. Helen Toner, CSET’s executive director, explains the two countries’ differing strategies, with China emphasizing open-source development and the U.S. relying on big tech dominance, and considers what “winning” in AI actually means. Read More

AI-related governance documents are rapidly proliferating, but what risks, mitigations, and other concepts do these documents actually cover? MIT AI Risk Initiative researchers Simon Mylius, Peter Slattery, Yan Zhu, Alexander Saeri, Jess Graham, Michael Noetel, and Neil Thompson teamed up with CSET’s Mina Narayanan and Adrian Thinnyun to pilot an approach that maps over 950 AI governance documents to several extensible taxonomies. These taxonomies cover AI risks and actors, targeted industry sectors, and other AI-related concepts, complementing AGORA’s thematic taxonomy of risk factors, harms, governance strategies, incentives for compliance, and application areas. Read More