Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.


The U.S. AI Action Plan is built on three familiar pillars—accelerating innovation, expanding infrastructure, and maintaining technological leadership—but its real test will depend on education and training. To that end, the Trump Administration has linked the plan to two executive orders issued in April 2025: Executive Order 14277, “Advancing Artificial Intelligence Education for American Youth,” and Executive Order 14278, “Preparing Americans for High-Paying Skilled Trade Jobs of the Future.” Both orders came with tight deadlines, and those windows have now closed. So where do things stand?

Red-teaming is a popular evaluation methodology for AI systems, but it still lacks theoretical grounding and technical best practices. This blog post introduces the concept of threat modeling for AI red-teaming and explores the ways that software tools can support or hinder red teams. To conduct effective evaluations, red-team designers should ensure their tools fit their threat model and their testers.

AI Control: How to Make Use of Misbehaving AI Agents

Kendrea Beers and Cody Rushing
| October 1, 2025

As AI agents become more autonomous and capable, organizations need new approaches to deploy them safely at scale. This explainer introduces the rapidly growing field of AI control, which offers practical techniques for organizations to get useful outputs from AI agents even when the AI agents attempt to misbehave.

China’s Artificial General Intelligence

William Hannas and Huey-Meei Chang
| August 29, 2025

Recent op-eds comparing the United States’ and China’s artificial intelligence (AI) programs fault the former for its focus on artificial general intelligence (AGI) while praising China for its success in applying AI throughout the whole of society. These op-eds overlook an important point: although China is outpacing the United States in diffusing AI across its society, China has by no means de-emphasized its state-sponsored pursuit of AGI.

AI and the Software Vulnerability Lifecycle

Chris Rohlf
| August 4, 2025

AI has the potential to transform cybersecurity through automation of vulnerability discovery, patching, and exploitation. Integrating these models with traditional software security tools allows engineers to proactively secure and harden systems earlier in the software development process.

Frontier AI capabilities show no sign of slowing down so that governance can catch up, yet national security challenges need addressing in the near term. This blog post outlines a governance approach that complements existing commitments by AI companies, arguing that the government should take targeted actions toward AI preparedness: sharing national security expertise, promoting transparency into frontier AI development, and facilitating the development of best practices.

How Prize Competitions Enable AI Innovation

Ali Crawford
| June 10, 2025

Federal prize competitions can help the U.S. government build a research and development ecosystem that incentivizes AI and cyber innovation and delivers for the American people. Over the last five years, prize competitions for AI and cyber innovation have increased by nearly 60%. When leveraged effectively, federal prize competitions offer unique benefits and can advance knowledge within a particular field or solicit solutions for specific government problems.

Despite recent upheaval in the AI policy landscape, AI evaluations—including AI red-teaming—will remain fundamental to understanding and governing the usage of AI systems and their impact on society. This blog post draws from a December 2024 CSET workshop on AI testing to outline challenges associated with improving red-teaming and suggest recommendations on how to address those challenges.

This blog post describes key takeaways from the NATO-Ukraine Defense Innovators Forum, held in Krakow, Poland in June 2024. It overviews changing concepts of operation, battlefield realities, and technological aspirations and innovations in Ukraine, with a focus on uncrewed aerial vehicles (UAVs) and counter-UAV systems. It builds upon CSET’s previous blog from the Future of Drones in Ukraine conference held in Warsaw in November 2023.

Revisiting AI Red-Teaming

Jessica Ji and Colin Shea-Blymyer
| September 26, 2024

This year, CSET researchers returned to the DEF CON cybersecurity conference to explore how understandings of AI red-teaming practices have evolved among cybersecurity practitioners and AI experts. This blog post, a companion to "How I Won DEF CON’s Generative AI Red-Teaming Challenge", summarizes our takeaways and concludes with a list of outstanding research questions regarding AI red-teaming, some of which CSET hopes to address in future work.