Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Examining Singapore’s AI Progress

Kayla Goode, Heeu Millie Kim, and Melissa Deng
| March 2023

Despite its small size, the city-state of Singapore continues to rise as an artificial intelligence hub, presenting significant opportunities for international collaboration. Initiatives such as fast-tracking patent approvals, incentivizing private investment, and addressing talent shortfalls are driving the country’s rapid growth as a global AI hub. These initiatives offer potential models for others seeking to leverage the technology, as well as opportunities for collaboration in AI education and talent exchanges, research and development, and governance. The United States and Singapore share similar goals regarding the development and use of trusted and responsible AI and should continue to foster greater collaboration among public- and private-sector entities.

Reports

Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk

Josh A. Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, and Katerina Sedova
| January 2023

Advances in machine learning have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near-human levels. In this report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future and provide a framework for assessing potential mitigation strategies.

Formal Response

Comment to the Office of the National Cyber Director on Cyber Workforce, Training, and Education

Ali Crawford and Jessica Ji
| November 1, 2022

CSET's Ali Crawford and Jessica Ji submitted this comment to the Office of the National Cyber Director in response to a request for information on a national strategy for a cyber workforce, training, and education.

CSET's Catherine Aiken testified before the National Artificial Intelligence Advisory Committee on measuring progress in U.S. AI research and development.

Reports

Downrange: A Survey of China’s Cyber Ranges

Dakota Cary
| September 2022

China is rapidly building cyber ranges that allow cybersecurity teams to test new tools, practice attack and defense, and evaluate the cybersecurity of a particular product or service. The presence of these facilities suggests a concerted effort on the part of the Chinese government, in partnership with industry and academia, to advance technological research and upskill its cybersecurity workforce—more evidence that China has entered near-peer status with the United States in the cyber domain.

Reports

Will AI Make Cyber Swords or Shields?

Andrew Lohn and Krystal Jackson
| August 2022

Funding and priorities for technology development today determine the terrain for digital battles tomorrow, and they provide the arsenals for both attackers and defenders. Unfortunately, researchers and strategists disagree on which technologies will ultimately prove most beneficial and which will cause more harm than good. This report provides three examples showing that, while the future of technology is impossible to predict with certainty, there is enough empirical data and mathematical theory to have these debates with more rigor.

Reports

U.S. High School Cybersecurity Competitions

Kayla Goode, Ali Crawford, and Christopher Back
| July 2022

In the current cyber-threat environment, a well-educated workforce is critical to U.S. national security. Today, however, nearly six hundred thousand cybersecurity positions remain unfilled across the public and private sectors. This report explores high school cybersecurity competitions as a potential avenue for increasing the domestic cyber talent pipeline. The authors examine the competitions, their reach, and their impact on students’ educational and professional development.

Reports

Will AI Make Cyber Swords or Shields?

Andrew Lohn
| July 27, 2022

We aim to demonstrate the value of mathematical models for policy debates about technological progress in cybersecurity by considering phishing, vulnerability discovery, and the dynamics between patching and exploitation. We then adjust the inputs to those mathematical models to match some possible advances in their underlying technology.

Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models. Although these attacks were initially conceived and studied digitally, by perturbing the raw pixel values of an image, recent work has demonstrated that they can successfully transfer to the physical world. This can be accomplished by printing out the patch and placing it in scenes that are then captured in new images or video footage.

CSET Senior Fellow Andrew Lohn testified before the House of Representatives Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation at a hearing on "Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks." Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.