Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, and we study biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Brief

Bayh-Dole Patent Trends

Sara Abdulla and Jack Corrigan
| August 2023

This brief examines trends in patents generated through federally funded research, otherwise known as Bayh-Dole patents. We find that while Bayh-Dole patents make up a small proportion of U.S. patents overall, they are much more common in certain fields, especially the biosciences and national defense-related fields. Academic institutions are major recipients of Bayh-Dole patents, and the funding landscape for patent-producing research has shifted since Bayh-Dole came into effect in 1980.

Reports

Onboard AI: Constraints and Limitations

Kyle Miller and Andrew Lohn
| August 2023

Artificial intelligence that makes news headlines, such as ChatGPT, typically runs in well-maintained data centers with an abundant supply of compute and power. However, these resources are more limited on many systems in the real world, such as drones, satellites, or ground vehicles. As a result, the AI that can run onboard these devices will often be inferior to state-of-the-art models. That can affect their usability and the need for additional safeguards in high-risk contexts. This issue brief contextualizes these challenges and provides policymakers with recommendations on how to engage with these technologies.

Reports

Confidence-Building Measures for Artificial Intelligence

Andrew Lohn
| August 3, 2023

Foundation models could eventually introduce several pathways for undermining state security: accidents, inadvertent escalation, unintentional conflict, the proliferation of weapons, and interference with human diplomacy are just a few on a long list. The Confidence-Building Measures for Artificial Intelligence workshop, hosted by the Geopolitics Team at OpenAI and the Berkeley Risk and Security Lab at the University of California, brought together a multistakeholder group to think through the tools and strategies to mitigate the potential risks that foundation models pose to international security.

Testimony

Jenny Jun's testimony before the House Foreign Affairs Subcommittee on the Indo-Pacific for a hearing titled "Illicit IT: Bankrolling Kim Jong Un."

Data Brief

Voices of Innovation

Sara Abdulla and Husanjot Chahal
| July 2023

This data brief identifies the most influential AI researchers in the United States between 2010 and 2021 via three metrics: number of AI publications, citations, and AI h-index. It examines their demographic profiles, career trajectories, and research collaboration rates, finding that most are men in the later stages of their career, largely concentrated in 10 elite universities and companies, and that nearly 70 percent of America’s top AI researchers were born abroad.

Data Brief

Who Cares About Trust?

Autumn Toney and Emelia Probasco
| July 2023

Artificial intelligence-enabled systems are transforming society and driving an intense focus on what policy and technical communities can do to ensure that those systems are trustworthy and used responsibly. This analysis draws on prior work about the use of trustworthy AI terms to identify 18 clusters of research papers that contribute to the development of trustworthy AI. In identifying these clusters, the analysis also reveals that some concepts, like "explainability," are forming distinct research areas, whereas other concepts, like "reliability," appear to be accepted as metrics and broadly applied.

Data Snapshot

Tracking Industry in Government Contracts

Christian Schoeberl
| July 19, 2023

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This short series explores how government procurement data can shed light on federal technological interest and utilization. It analyzes contract metadata, provided in a collaborative project with Govini, to track key emerging technologies through the federal procurement process.

Data Brief

Identifying AI Research

Christian Schoeberl, Autumn Toney, and James Dunham
| July 2023

The choice of method for surfacing AI-relevant publications impacts the ultimate research findings. This report provides a quantitative analysis of various methods available to researchers for identifying AI-relevant research within CSET’s merged corpus, and showcases the research implications of each method.

Data Snapshot

Examining Key Tech Areas in Government Contracts Data

Christian Schoeberl
| July 6, 2023

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This short series explores how government procurement data can shed light on federal technological interest and utilization. It analyzes contract metadata, provided in a collaborative project with Govini, to track key emerging technologies through the federal procurement process.

Reports

Autonomous Cyber Defense

Andrew Lohn, Anna Knack, Ant Burke, and Krystal Jackson
| June 2023

The current AI-for-cybersecurity paradigm focuses on detection using automated tools, but it has largely neglected holistic autonomous cyber defense systems — ones that can act without human tasking. That is poised to change as tools are proliferating for training reinforcement learning-based AI agents to provide broader autonomous cybersecurity capabilities. The resulting agents are still rudimentary and publications are few, but the current barriers are surmountable and effective agents would be a substantial boon to society.