Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.

Promises and Progress

Ali Crawford
| November 20, 2025

The U.S. AI Action Plan is built on three familiar pillars—accelerating innovation, expanding infrastructure, and maintaining technological leadership—but its real test depends on education and training. To that end, the Trump Administration has linked the plan to two executive orders issued in April 2025: Executive Order 14277, “Advancing Artificial Intelligence Education for American Youth,” and Executive Order 14278, “Preparing Americans for High-Paying Skilled Trade Jobs of the Future.” Both orders came with tight deadlines, and those windows have now closed. So where do things stand?

On July 31, 2025, the Trump administration released “Winning the Race: America’s AI Action Plan.” CSET has broken down the Action Plan, focusing on specific government deliverables. Our Provision and Timeline tracker identifies which agencies are responsible for implementing recommendations and the types of actions they should take.

California’s Approach to AI Governance

Devin Von Arx
| November 4, 2025

This blog examines 18 AI-related laws that California enacted in 2024, eight of which are explored in more detail in an accompanying CSET Emerging Technology Observatory (ETO) blog. It also chronicles California’s history of regulating AI and other emerging technologies and highlights several AI bills that have moved through the California legislature in 2025.

Red-teaming is a popular evaluation methodology for AI systems, but it still lacks theoretical grounding and technical best practices. This blog introduces the concept of threat modeling for AI red-teaming and explores the ways that software tools can support or hinder red teams. To conduct effective evaluations, red-team designers should ensure their tools fit with their threat model and their testers.

AI Control: How to Make Use of Misbehaving AI Agents

Kendrea Beers and Cody Rushing
| October 1, 2025

As AI agents become more autonomous and capable, organizations need new approaches to deploy them safely at scale. This explainer introduces the rapidly growing field of AI control, which offers practical techniques for organizations to get useful outputs from AI agents even when the AI agents attempt to misbehave.

China’s Artificial General Intelligence

William Hannas and Huey-Meei Chang
| August 29, 2025

Recent op-eds comparing the United States’ and China’s artificial intelligence (AI) programs fault the former for its focus on artificial general intelligence (AGI) while praising China for its success in applying AI throughout the whole of society. These op-eds overlook an important point: although China is outpacing the United States in diffusing AI across its society, China has by no means de-emphasized its state-sponsored pursuit of AGI.

AI and the Software Vulnerability Lifecycle

Chris Rohlf
| August 4, 2025

AI has the potential to transform cybersecurity through automation of vulnerability discovery, patching, and exploitation. Integrating these models with traditional software security tools allows engineers to proactively secure and harden systems earlier in the software development process.

To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general-purpose AI comply with the AI Act. This blog reviews the measures set out in the new Code’s safety and security chapter, assesses how they compare to existing practices, and considers what the Code’s global impact might be.

On July 23, the White House published its long-awaited AI Action Plan. In this post, CSET's Alex Friedland breaks down the most important takeaways.

Frontier AI capabilities show no sign of slowing down so that governance can catch up, yet national security challenges need to be addressed in the near term. This blog post outlines a governance approach that complements existing commitments by AI companies, arguing that the government should take targeted actions toward AI preparedness: sharing national security expertise, promoting transparency into frontier AI development, and facilitating the development of best practices.