Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Brief

Mapping India’s AI Potential

Husanjot Chahal, Sara Abdulla, Jonathan Murdick, and Ilya Rahkovsky
| March 2021

With its massive information technology workforce, thriving research community and a growing technology ecosystem, India has a significant stake in the development of artificial intelligence globally. Drawing from a variety of original CSET datasets, the authors evaluate India’s potential for AI by examining its progress across five categories of indicators pertinent to AI development: talent, research, patents, companies and investments, and compute.

Reports

Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy
| March 2021

The Chinese government is pouring money into public-private investment funds, known as guidance funds, to advance China’s strategic and emerging technologies, including artificial intelligence. These funds are mobilizing massive amounts of capital from public and private sources—prompting both concern and skepticism among outside observers. This overview presents essential findings from our full-length report on these funds, analyzing the guidance fund model, its intended benefits and weaknesses, and its long-term prospects for success.

Reports

Understanding Chinese Government Guidance Funds

Ngor Luong, Zachary Arnold, and Ben Murphy
| March 2021

China’s government is using public-private investment funds, known as guidance funds, to deploy massive amounts of capital in support of strategic and emerging technologies, including artificial intelligence. Drawing exclusively on Chinese-language sources, this report explores how guidance funds raise and deploy capital, manage their investments, and interact with public and private actors. The guidance fund model is no silver bullet, but it has many advantages over traditional industrial policy mechanisms.

Reports

Academics, AI, and APTs

Dakota Cary
| March 2021

Six Chinese universities have relationships with Advanced Persistent Threat (APT) hacking teams. Their activities range from recruitment to running cyber operations. These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field. This report examines these universities’ relationships with known APTs and analyzes the schools’ AI/ML research that may translate to future operational capabilities.

Data Brief

Using Machine Learning to Fill Gaps in Chinese AI Market Data

Zachary Arnold, Joanne Boisson, Lorenzo Bongiovanni, Daniel Chou, Carrie Peelman, and Ilya Rahkovsky
| February 2021

In this proof-of-concept project, CSET and Amplyfi Ltd. used machine learning models and Chinese-language web data to identify Chinese companies active in artificial intelligence. Most of these companies were not labeled or described as AI-related in two high-quality commercial datasets. The authors' findings show that using structured data alone—even from the best providers—will yield an incomplete picture of the Chinese AI landscape.

Reports

China’s STI Operations

William Hannas and Huey-Meei Chang
| January 2021

Open source intelligence (OSINT) and science and technology intelligence (STI) are practiced differently in the United States and China, with China placing greater value on both. In the U.S. understanding, OSINT “enables” classified reporting, while in China it is the intelligence of first resort. The contrast extends to STI, which has a lower priority in the U.S. system, whereas China’s top leaders lavish personal attention on STI and rely on it for national decisions. Establishing a “National S&T Analysis Center” within the U.S. government could help address these challenges.

Reports

AI and the Future of Cyber Competition

Wyatt Hoffman
| January 2021

As states turn to AI to gain an edge in cyber competition, it will change the cat-and-mouse game between cyber attackers and defenders. Embracing machine learning systems for cyber defense could drive more aggressive and destabilizing engagements between states. Wyatt Hoffman writes that cyber competition already has the ingredients needed for escalation to real-world violence, even if these ingredients have yet to come together in the right conditions.

Reports

Hacking AI

Andrew Lohn
| December 2020

Machine learning systems’ vulnerabilities are pervasive. Hackers and adversaries can easily exploit them. As such, managing the risks is too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.

Reports

Universities and the Chinese Defense Technology Workforce

Ryan Fedasiuk and Emily S. Weinstein
| December 2020

To help U.S. policymakers address long-held concerns about risks and threats associated with letting Chinese university students or graduates study in the United States, CSET experts examine which forms of collaboration, and with which Chinese universities, pose the greatest risk to U.S. research security.

Reports

Automating Cyber Attacks

Ben Buchanan, John Bansemer, Dakota Cary, Jack Lucas, and Micah Musser
| November 2020

Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and what strategies attackers are likely or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.