Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Brief

Identifying AI Research

Christian Schoeberl, Autumn Toney, and James Dunham
| July 2023

The choice of method for surfacing AI-relevant publications impacts the ultimate research findings. This report provides a quantitative analysis of various methods available to researchers for identifying AI-relevant research within CSET’s merged corpus, and showcases the research implications of each method.

Data Brief

The Inigo Montoya Problem for Trustworthy AI

Emelia Probasco, Autumn Toney, and Kathleen Curlee
| June 2023

When the technology and policy communities use terms associated with trustworthy AI, could they be talking past one another? This paper examines the use of trustworthy AI keywords and the potential for an “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

Data Brief

Building the Cybersecurity Workforce Pipeline

Luke Koslosky, Ali Crawford, and Sara Abdulla
| June 2023

Creating adequate talent pipelines for the cybersecurity workforce is an ongoing priority for the federal government. Understanding the effectiveness of current education initiatives will help policymakers make informed decisions. This report analyzes the National Centers of Academic Excellence in Cyber (NCAE-C), a consortium of institutions designated as centers of excellence by the National Security Agency. It aims to determine how NCAE-C-designated institutions fare compared to other schools in graduating students with cyber-related degrees and credentials.

Data Brief

“The Main Resource is the Human”

Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken, and Andrew Lohn
| April 2023

Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent “notable” models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.

Data Brief

Mapping Biosafety Level-3 Laboratories by Publications

Caroline Schuerger, Sara Abdulla, and Anna Puglisi
| August 2022

Biosafety Level-3 laboratories (BSL-3) are an essential part of research infrastructure and are used to develop vaccines and therapies. The research conducted in them provides insights into host-pathogen interactions that may help prevent future pandemics. However, these facilities also potentially pose a risk to society through lab accidents or misuse. Despite their importance, there is no comprehensive list of BSL-3 facilities or of the institutions that house them. By systematically assessing PubMed articles published in English from 2006 to 2021, this paper maps institutions that host BSL-3 labs by their locations, augmenting current knowledge of where high-containment research is conducted globally.

Data Brief

Counting AI Research

Daniel Chou
| July 2022

Tracking the output of a country’s researchers can inform assessments of its innovativeness or assist in evaluating the impact of certain funding initiatives. However, measuring research output is not as straightforward as it may seem. Using a detailed analysis that includes Chinese-language research publications, this data brief reveals that China's lead in artificial intelligence research output is greater than many English-language sources suggest.

Data Brief

China’s State Key Laboratory System

Emily S. Weinstein, Channing Lee, Ryan Fedasiuk, and Anna Puglisi
| June 2022

China’s State Key Laboratory system drives innovation in science and technology. These labs conduct cutting-edge basic and applied research, attract and train domestic and foreign talent, and conduct academic exchanges with foreign counterparts. This report assesses trends in the research priorities, management structures, and talent recruitment efforts of nearly five hundred Chinese State Key Labs. The accompanying data visualization maps their geographical locations and host institutions.

Data Brief

China’s Industrial Clusters

Anna Puglisi and Daniel Chou
| June 2022

China is banking on applying AI to biotechnology research in order to transform itself into a “biotech superpower.” In pursuit of that goal, it has emphasized bringing together different aspects of the development cycle to foster multidisciplinary research. This data brief examines the emerging trend of co-location of AI and biotechnology researchers and explores the potential impact it will have on this growing field.

Data Brief

A Competitive Era for China’s Universities

Ryan Fedasiuk, Alan Omar Loera Martinez, and Anna Puglisi
| March 2022

This brief illuminates the scale of Chinese government funding for higher education, science, and technology by exploring budget and expense reports for key government organizations and 34 of China’s most elite “Double First Class” universities. Chinese political leaders view elite universities as key components of the country’s military modernization, economic growth, and soft power, a situation that presents security risks for international partners.

Data Brief

Exploring Clusters of Research in Three Areas of AI Safety

Helen Toner and Ashwin Acharya
| February 2022

Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.