Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI's development and use, and study biotechnology.

Reports

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

AI and Compute

Andrew Lohn and Micah Musser
| January 2022

Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trendline is unsustainable. Because of cost, hardware availability, and engineering difficulties, the next decade of AI cannot rely exclusively on applying ever more computing power to drive further progress.

Data Brief

Comparing U.S. and Chinese Contributions to High-Impact AI Research

Ashwin Acharya and Brian Dunn
| January 2022

In the past decade, Chinese researchers have become increasingly prolific authors of highly cited AI publications, approaching the global research share of their U.S. counterparts. However, some analysts question the impact of Chinese publications; are they well respected internationally, and do they cover important topics? In this data brief, the authors build on prior analyses of top AI publications to provide a richer understanding of the two countries’ contributions to high-impact AI research.

Reports

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, and Ido Wulkan
| December 2021

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

Reports

Making AI Work for Cyber Defense

Wyatt Hoffman
| December 2021

Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems can require trade-offs based on the types of threats, defenders are often caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.

Reports

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, and Ido Wulkan
| December 2021

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework — a disinformation kill chain — this report describes the stages and techniques used by human operators to build disinformation campaigns.

Reports

Staying Ahead

Diana Gehlhaus
| November 2021

This research agenda provides a roadmap for the next phase of CSET's line of research on the U.S. AI workforce. Our goal is to assist policymakers and other stakeholders in the national security community in creating policies that will ensure the United States maintains its competitive advantage in AI talent. We welcome comments, feedback, and input on this vision at cset@georgetown.edu.

Reports

Federal Prize Competitions

Ali Crawford and Ido Wulkan
| November 2021

In science and technology, U.S. federal prize competitions are a way to promote innovation, advance knowledge, and solicit technological solutions to problems. In this report, the authors identify the unique advantages of such competitions over traditional R&D processes, and how these advantages might benefit artificial intelligence research.

Data Visualization

AI Education Catalog

Claire Perkins, Diana Gehlhaus, Kayla Goode, Jennifer Melot, Ehrik Aldana, Grace Doerfler, and Gayani Gamage
| October 2021

Created through a joint partnership between CSET and the AI Education Project, the AI Education Catalog aims to raise awareness of the AI-related programs available to students and educators, as well as to help inform AI education and workforce policy.

Reports

U.S. AI Workforce: Policy Recommendations

Diana Gehlhaus, Luke Koslosky, Kayla Goode, and Claire Perkins
| October 2021

This policy brief addresses the need for a clearly defined artificial intelligence education and workforce policy by providing recommendations designed to grow, sustain, and diversify the U.S. AI workforce. The authors employ a comprehensive definition of the AI workforce — technical and nontechnical occupations alike — and provide data-driven policy goals. Their recommendations are designed to leverage opportunities within the U.S. education and training system while mitigating its challenges, and to prioritize equitable access to AI education and AI careers.

Reports

The DOD’s Hidden Artificial Intelligence Workforce

Diana Gehlhaus, Ron Hodge, Luke Koslosky, Kayla Goode, and Jonathan Rotner
| September 2021

This policy brief, authored in collaboration with the MITRE Corporation, provides a new perspective on the U.S. Department of Defense's struggle to recruit and retain artificial intelligence talent. The authors find that the DOD already has a cadre of AI and related experts, but that this talent remains hidden. Better leveraging this talent could go a long way toward meeting the DOD's AI objectives. The authors argue that this can be done through policies that more effectively identify AI talent and assignment opportunities, processes that incentivize experimentation and changes in career paths, and investments in the necessary technological infrastructure.