CyberAI

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, written in collaboration with OpenAI and the Stanford Internet Observatory, offers a comprehensive analysis of how AI-generated text can fuel influence operations, along with thoughtful ideas on what governments, AI developers, and tech platforms might do about it.

Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk

Josh A. Goldstein Girish Sastry Micah Musser Renée DiResta Matthew Gentzel Katerina Sedova
| January 2023

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

Compute Accounting Principles Can Help Reduce AI Risks

Tech Policy Press
| November 30, 2022

In an opinion piece for Tech Policy Press, CSET's Krystal Jackson, Karson Elmgren, Jacob Feldgoise, and their coauthor Andrew Critch wrote about computational power as a key factor driving AI progress and how compute accounting principles can help reduce AI risks.

A Plea: The Case for Digital Environmentalism

Andrew Burt Daniel E. Geer, Jr.
| November 2022

Digital technology, the defining innovation of the last half century, has deep and unaddressed insecurities at its core. This paper, authored by two prominent technologists and strategic thinkers, argues that a new form of “digital environmentalism”—marked by a re-evaluation of our relationship to technology, growth, and innovation—is the only way to fix such insecurities, and to bring meaningful change to the digital world.

In a piece examining Google's work on various AI projects, Axios highlights the potential for AI to turbocharge disinformation campaigns and cites CSET's work examining this possibility.

CSET's Josh Goldstein hosts a panel of experts to discuss large language models and the future of disinformation as part of CyberScoop's CyberWeek.

In an opinion piece for Lawfare, Research Analyst Micah Musser discussed new regulations that took effect in China requiring companies deploying recommendation algorithms to file details about those algorithms with the Cyberspace Administration of China.

Downrange: A Survey of China’s Cyber Ranges

Dakota Cary
| September 2022

China is rapidly building cyber ranges that allow cybersecurity teams to test new tools, practice attack and defense, and evaluate the cybersecurity of a particular product or service. The presence of these facilities suggests a concerted effort on the part of the Chinese government, in partnership with industry and academia, to advance technological research and upskill its cybersecurity workforce—more evidence that China has entered near-peer status with the United States in the cyber domain.

In an opinion piece for Scientific American, Dakota Cary argued that civilian satellites should be designated as critical infrastructure.