
Red-teaming is a popular evaluation methodology for AI systems, but it is still severely lacking in theoretical grounding and technical best practices. This blog post introduces the concept of threat modeling for AI red-teaming and explores the ways that software tools can support or hinder red teams. To conduct effective evaluations, red-team designers should ensure their tools fit both their threat model and their testers.

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles | October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights and derives a taxonomy of the use cases that open weights enable. The authors identify seven distinct use cases in which open weights allow researchers to investigate a wider scope of questions, explore more avenues of experimentation, and implement a larger set of techniques.

AI Control: How to Make Use of Misbehaving AI Agents

Kendrea Beers and Cody Rushing | October 1, 2025

As AI agents become more autonomous and capable, organizations need new approaches to deploy them safely at scale. This explainer introduces the rapidly growing field of AI control, which offers practical techniques for organizations to get useful outputs from AI agents even when those agents attempt to misbehave.

Harmonizing AI Guidance: Distilling Voluntary Standards and Best Practices into a Unified Framework

Kyle Crichton, Abhiram Reddy, Jessica Ji, Ali Crawford, Mia Hoffmann, Colin Shea-Blymyer, and John Bansemer | September 2025

Organizations looking to adopt artificial intelligence (AI) systems face the challenge of deciphering a myriad of voluntary standards and best practices—requiring time, resources, and expertise that many cannot afford. To address this problem, this report distills over 7,000 recommended practices from 52 reports into a single harmonized framework. Integrating new AI guidance with existing safety and security practices, this work provides a road map for organizations navigating the complex landscape of AI guidance.

China’s Artificial General Intelligence

William Hannas and Huey-Meei Chang | August 29, 2025

Recent op-eds comparing the United States’ and China’s artificial intelligence (AI) programs fault the former for its focus on artificial general intelligence (AGI) while praising China for its success in applying AI throughout the whole of society. These op-eds overlook an important point: although China is outpacing the United States in diffusing AI across its society, China has by no means de-emphasized its state-sponsored pursuit of AGI.

CSET’s Jessica Ji shared her expert analysis in an interview published by Science News. The interview discusses the U.S. government’s new action plan to integrate artificial intelligence into federal operations and highlights the significant privacy, cybersecurity, and civil liberties risks of using AI tools on consolidated sensitive data, such as health, financial, and personal records.

AI and the Software Vulnerability Lifecycle

Chris Rohlf | August 4, 2025

AI has the potential to transform cybersecurity by automating vulnerability discovery, patching, and exploitation. Integrating AI models with traditional software security tools allows engineers to proactively secure and harden systems earlier in the software development process.

Frontier AI capabilities show no sign of slowing down to let governance catch up, yet the national security challenges they pose must be addressed in the near term. This blog post outlines a governance approach that complements existing commitments by AI companies. It argues that the government should take targeted actions toward AI preparedness: sharing national security expertise, promoting transparency into frontier AI development, and facilitating the development of best practices.

This roundtable report explores how practitioners, researchers, educators, and government officials view work-based learning as a tool for strengthening the cybersecurity workforce. Participants' discussion provided insight and context into what makes work-based learning unique, effective, and valuable for the cyber workforce.

AI System-to-Model Innovation

Jonah Schiestle and Andrew Imbrie | July 2025

System-to-model innovation is an emerging pathway of artificial intelligence innovation that has driven progress in several prominent areas over the past decade. System-level innovations advance with the diffusion of AI and expand the base of contributors to leading-edge progress in the field. Countries that can identify and harness system-level innovations faster and more comprehensively will gain crucial economic and military advantages over competitors. This paper analyzes the benefits of system-to-model innovation and proposes a three-part framework for navigating the policy implications: protect, diffuse, and anticipate.