Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Brief

U.S. AI Summer Camps

Claire Perkins and Kayla Goode
| August 2021

Summer camps are an integral part of many U.S. students’ education, but little is known about camps that focus on artificial intelligence education. This data brief maps out the AI summer camp landscape in the United States and explores the camps’ locations, target age ranges, prices, and hosting organization types.

Data Visualization

National Cybersecurity Center Map

Dakota Cary and Jennifer Melot
| July 2021

China wants to be a “cyber powerhouse” (网络强国). At the heart of this mission is the sprawling 40 km² campus of the National Cybersecurity Center. Formally called the National Cybersecurity Talent and Innovation Base (国家网络安全人才与创新基地), the NCC is being built in Wuhan. The campus, under construction since 2017, includes seven centers for research, talent cultivation, and entrepreneurship; two government-focused laboratories; and a National Cybersecurity School.

Reports

U.S. Demand for AI Certifications

Diana Gehlhaus and Ines Pancorbo
| June 2021

This issue brief explores whether artificial intelligence and AI-related certifications serve as potential pathways to enter the U.S. AI workforce. Drawing on data from U.S. job postings for AI occupations between 2010 and 2020, the authors find little employer demand for AI and AI-related certifications. From this perspective, such certifications appear to present more hype than promise.

Reports

Truth, Lies, and Automation

Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova
| May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.

Reports

U.S. AI Workforce

Diana Gehlhaus and Ilya Rahkovsky
| April 2021

A lack of good data on the U.S. artificial intelligence workforce limits the potential effectiveness of policies meant to increase and cultivate this cadre of talent. In this issue brief, the authors bridge that information gap with new analysis on the state of the U.S. AI workforce, along with insight into the ongoing concern over AI talent shortages. Their findings suggest some segments of the AI workforce are more likely than others to be experiencing a supply-demand gap.

Reports

Assessing the Scope of U.S. Visa Restrictions on Chinese Students

Remco Zwetsloot, Emily S. Weinstein, and Ryan Fedasiuk
| February 2021

In May 2020, the White House announced it would deny visas to Chinese graduate students and researchers who are affiliated with organizations that implement or support China’s military-civil fusion strategy. The authors discuss several ways this policy might be implemented. Based on Chinese and U.S. policy documents and data sources, they estimate that between three and five thousand Chinese students might be prevented from entering U.S. graduate programs each year.

Reports

The U.S. AI Workforce

Diana Gehlhaus and Santiago Mutis
| January 2021

As the United States seeks to maintain a competitive edge in artificial intelligence, the strength of its AI workforce will be of paramount importance. To understand the current state of the domestic AI workforce, Diana Gehlhaus and Santiago Mutis define the AI workforce and offer a preliminary assessment of its size, composition, and key characteristics. Among their findings: The domestic supply of AI talent consisted of an estimated 14 million workers (or about 9% of total U.S. employment) as of 2018.

Data Brief

Most of America’s “Most Promising” AI Startups Have Immigrant Founders

Tina Huang, Zachary Arnold, and Remco Zwetsloot
| October 2020

Half of Silicon Valley’s startups have at least one foreign-born founder, and immigrants are twice as likely as native-born Americans to start new businesses. To understand how immigration shapes AI entrepreneurship in the United States in particular, Huang, Arnold, and Zwetsloot analyze the 2019 AI 50, Forbes’s list of the “most promising” U.S.-based AI startups. They find that 66 percent of these startups had at least one immigrant founder. The authors write that policymakers should consider lifting some current immigration restrictions and creating new pathways for entrepreneurs.

Reports

Estimating the Number of Chinese STEM Students in the United States

Jacob Feldgoise and Remco Zwetsloot
| October 2020

In recent years, concern has grown about the risks of Chinese nationals studying science, technology, engineering and mathematics (STEM) subjects at U.S. universities. This data brief estimates in detail the number of Chinese STEM students in the United States by field of study and degree level. Among its findings: Chinese nationals comprise 16 percent of all graduate STEM students and 2 percent of undergraduate STEM students, lower proportions than previously suggested in U.S. government reports.

Reports

Downscaling Attack and Defense

Andrew Lohn
| October 7, 2020

The resizing of images, typically a required preprocessing step for computer vision systems, is vulnerable to attack. Images can be crafted to look completely different at machine-vision scales than at other scales, and the default settings of some common computer vision and machine learning systems are vulnerable to this technique.
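
To make the mechanism concrete, here is a minimal illustrative sketch, not code from the report, of how a downscaling attack can work when the resizer samples only a sparse grid of source pixels, as simple nearest-neighbor downscaling does. The toy resizer, function names, and image sizes below are assumptions chosen for illustration, not any particular library's implementation.

```python
# Minimal sketch of an image-scaling attack (illustrative only, not the
# report's code). A toy nearest-neighbor downscaler keeps one source pixel
# per output pixel; an attacker who knows which pixels are kept can overwrite
# just those, so the downscaled image differs completely from the original.
import numpy as np

def nearest_downscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Toy nearest-neighbor resize: keep one source pixel per output pixel."""
    H, W = img.shape[:2]
    rows = (np.arange(out_h) * H) // out_h
    cols = (np.arange(out_w) * W) // out_w
    return img[np.ix_(rows, cols)]

def embed_payload(cover: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Overwrite exactly the source pixels the downscaler will sample."""
    H, W = cover.shape[:2]
    h, w = payload.shape[:2]
    rows = (np.arange(h) * H) // h
    cols = (np.arange(w) * W) // w
    attacked = cover.copy()
    attacked[np.ix_(rows, cols)] = payload
    return attacked

# A white "cover" image (what a human sees) hiding an all-black "payload"
# (what the model sees after resizing to 64x64).
cover = np.full((512, 512, 3), 255, dtype=np.uint8)
payload = np.zeros((64, 64, 3), dtype=np.uint8)
attacked = embed_payload(cover, payload)

print(np.array_equal(nearest_downscale(attacked, 64, 64), payload))  # True
print((attacked != cover).any(axis=2).mean())  # 0.015625: ~1.6% of pixels changed
```

Because only the sampled pixels are modified, about 1.6 percent of the image in this example, the full-size image still looks unchanged to a human reviewer while the downscaled version seen by the model is entirely different. Resizers that average over all source pixels, rather than sampling a sparse grid, make this particular manipulation much harder.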