Cybersecurity

Militaries seek to harness artificial intelligence for decision advantage. Yet AI systems introduce a new source of uncertainty: the likelihood of technical failures. Such failures could interact with strategic and human factors in ways that lead to miscalculation and escalation in a crisis or conflict. Harnessing AI effectively requires managing these risk trade-offs by reducing the likelihood of, and containing the consequences of, AI failures.

BBC News cited a report authored by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in partnership with OpenAI and the Stanford Internet Observatory. The article also quoted Josh Goldstein on the current state of AI systems.

A report by CSET's Emily S. Weinstein and Ngor Luong was cited in an article published by Roll Call. The report identifies the main U.S. investors active in the Chinese artificial intelligence market and the set of AI companies in China that have benefited from U.S. capital.

Russian hackers are using ChatGPT to write malicious pieces of code

Interesting Engineering
| January 16, 2023

An article on Interesting Engineering referenced a report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in partnership with OpenAI and Stanford Internet Observatory. The report examines the potential misuse of language models in influence operations in the future and offers a framework for evaluating potential countermeasures.

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published by Grid. The report examines the potential misuse of language models for influence operations in the future and proposes a structure for evaluating possible solutions to this problem.

A report by CSET's Jack Corrigan was cited in a GCN article. The report presents an outline of procurement bans at the federal and state levels and suggests measures to enhance defenses against threats from foreign technology.

A report by Jack Corrigan from CSET was referenced in an opinion piece published in the Sun Herald. The report presents an outline of procurement bans at the federal and state levels and suggests measures to enhance defenses against threats from foreign technology.

A report by CSET’s Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published on Medium. The report explores how language models could be misused for influence operations in the future, and it provides a framework for assessing potential mitigation strategies.

Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk

Josh A. Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, and Katerina Sedova
| January 2023

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

In an interview with CyberScoop, Research Fellow Josh A. Goldstein discussed his research, conducted in collaboration with OpenAI and the Stanford Internet Observatory, on the use of large language models to deploy propaganda.