Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI's development and use, and we study biotechnology.

Analysis

Understanding the Global Gain-of-Function Research Landscape

Caroline Schuerger, Steph Batalis, Katherine Quinn, Ronnie Kinoshita, Owen Daniels, Anna Puglisi
| August 2023

Gain- and loss-of-function research have contributed to breakthroughs in vaccine development, genetic research, and gene therapy. At the same time, a subset of gain- and loss-of-function studies involve high-risk, highly virulent pathogens that could spread widely among humans if deliberately or unintentionally released. In this report, we map the gain- and loss-of-function global research landscape using a quantitative approach that combines machine learning with subject-matter expert review.

Analysis

Onboard AI: Constraints and Limitations

Kyle Miller, Andrew Lohn
| August 2023

Artificial intelligence that makes news headlines, such as ChatGPT, typically runs in well-maintained data centers with an abundant supply of compute and power. However, these resources are more limited on many systems in the real world, such as drones, satellites, or ground vehicles. As a result, the AI that can run onboard these devices will often be inferior to state-of-the-art models. This can limit their usefulness and increase the need for additional safeguards in high-risk contexts. This issue brief contextualizes these challenges and provides policymakers with recommendations on how to engage with these technologies.

Jenny Jun's testimony before the House Foreign Affairs Subcommittee on the Indo-Pacific for a hearing titled "Illicit IT: Bankrolling Kim Jong Un."

Analysis

Autonomous Cyber Defense

Andrew Lohn, Anna Knack, Ant Burke, Krystal Jackson
| June 2023

The current AI-for-cybersecurity paradigm focuses on detection using automated tools, but it has largely neglected holistic autonomous cyber defense systems — ones that can act without human tasking. That is poised to change as tools are proliferating for training reinforcement learning-based AI agents to provide broader autonomous cybersecurity capabilities. The resulting agents are still rudimentary and publications are few, but the current barriers are surmountable and effective agents would be a substantial boon to society.

Data Brief

“The Main Resource is the Human”

Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken, Andrew Lohn
| April 2023

Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent "notable" models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.

Analysis

Adversarial Machine Learning and Cybersecurity

Micah Musser
| April 2023

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.

Analysis

Reducing the Risks of Artificial Intelligence for Military Decision Advantage

Wyatt Hoffman, Heeu Millie Kim
| March 2023

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood of, and containing the consequences of, AI failures.

Analysis

Examining Singapore’s AI Progress

Kayla Goode, Heeu Millie Kim, Melissa Deng
| March 2023

Despite its small size, the city-state of Singapore continues to rise as an artificial intelligence hub, presenting significant opportunities for international collaboration. Initiatives such as fast-tracking patent approval, incentivizing private investment, and addressing talent shortfalls are driving the country's rapid growth in AI. Such initiatives offer potential models for those seeking to leverage the technology, as well as opportunities for collaboration in AI education and talent exchanges, research and development, and governance. The United States and Singapore share similar goals regarding the development and use of trusted and responsible AI and should continue to foster greater collaboration among public and private sector entities.

Analysis

Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk

Josh A. Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, Katerina Sedova
| January 2023

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near-human levels. In this report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

CSET's Ali Crawford and Jessica Ji submitted this comment to the Office of the National Cyber Director in response to a request for information on a national strategy for the cyber workforce, training, and education.

Analysis

Downrange: A Survey of China’s Cyber Ranges

Dakota Cary
| September 2022

China is rapidly building cyber ranges that allow cybersecurity teams to test new tools, practice attack and defense, and evaluate the cybersecurity of a particular product or service. The presence of these facilities suggests a concerted effort on the part of the Chinese government, in partnership with industry and academia, to advance technological research and upskill its cybersecurity workforce—more evidence that China has entered near-peer status with the United States in the cyber domain.