Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

The Long-Term Stay Rates of International STEM PhD Graduates

Jack Corrigan, James Dunham, and Remco Zwetsloot
| April 2022

This issue brief uses data from the National Science Foundation’s Survey of Doctorate Recipients to explore how many of the international students who earn STEM PhDs from U.S. universities stay in the country after graduation. The authors trace the journeys that these graduates take through the immigration system and find that most remain in the United States long after earning their degrees.

Data Brief

Exploring Clusters of Research in Three Areas of AI Safety

Helen Toner and Ashwin Acharya
| February 2022

Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.

Data Visualization

Classifying AI Systems

Catherine Aiken and Brian Dunn
| December 2021

This Classifying AI Systems Interactive presents several AI system classification frameworks developed to distill AI systems into concise, comparable and policy-relevant dimensions. It provides key takeaways and framework-specific results from CSET’s analysis of more than 1,800 system classifications done by survey respondents using the frameworks. You can explore the frameworks and example AI systems used in the survey, and even take the survey yourself.

Data Brief

Chinese and U.S. University Rankings

Jack Corrigan and Simon Rodriguez
| January 2022

The strength of a country’s talent pipeline depends in no small part on the quality of its universities. This data brief explores how Chinese and U.S. universities perform in two different global university rankings, why their standings have changed over time, and what those trends mean for graduates.

Data Brief

Comparing U.S. and Chinese Contributions to High-Impact AI Research

Ashwin Acharya and Brian Dunn
| January 2022

In the past decade, Chinese researchers have become increasingly prolific authors of highly cited AI publications, approaching the global research share of their U.S. counterparts. However, some analysts question the impact of Chinese publications: are they well respected internationally, and do they cover important topics? In this data brief, the authors build on prior analyses of top AI publications to provide a richer understanding of the two countries’ contributions to high-impact AI research.

Reports

Key Concepts in AI Safety: Specification in Machine Learning

Tim G. J. Rudner and Helen Toner
| December 2021

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues — problems of robustness, assurance, and specification — and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.

Reports

Staying Ahead

Diana Gehlhaus
| November 2021

This research agenda provides a roadmap for the next phase of CSET’s line of research on the U.S. AI workforce. Our goal is to assist policymakers and other stakeholders in the national security community in creating policies that ensure the United States maintains its competitive advantage in AI talent. We welcome comments, feedback and input on this vision at cset@georgetown.edu.

Data Brief

Classifying AI Systems

Catherine Aiken
| November 2021

This brief explores the development and testing of artificial intelligence system classification frameworks intended to distill AI systems into concise, comparable and policy-relevant dimensions. Comparing more than 1,800 system classifications, it points to several factors that increase the utility of a framework for human classification of AI systems and enable AI system management, risk assessment and governance.

Data Visualization

AI Education Catalog

Claire Perkins, Diana Gehlhaus, Kayla Goode, Jennifer Melot, Ehrik Aldana, Grace Doerfler, and Gayani Gamage
| October 2021

Created through a joint partnership between CSET and the AI Education Project, the AI Education Catalog aims to raise awareness of the AI-related programs available to students and educators, as well as to help inform AI education and workforce policy.

Reports

U.S. AI Workforce: Policy Recommendations

Diana Gehlhaus, Luke Koslosky, Kayla Goode, and Claire Perkins
| October 2021

This policy brief addresses the need for a clearly defined artificial intelligence education and workforce policy by providing recommendations designed to grow, sustain, and diversify the U.S. AI workforce. The authors employ a comprehensive definition of the AI workforce — technical and nontechnical occupations — and provide data-driven policy goals. Their recommendations are designed to leverage opportunities within the U.S. education and training system while mitigating its challenges, and to prioritize equitable access and opportunity in AI education and AI careers.