Disinformation

CSET's Micah Musser provided expert insight in an NPR article on the accessibility of generative artificial intelligence applications and their potential impact on political campaigns and public opinion.

In a Lawfare op-ed, CSET's Andrew Lohn and Joshua A. Goldstein share their insights on the difficulty of identifying AI-generated text in disinformation campaigns.

CSET's Jenny Jun was featured in the Atlantic Council's The 5x5, a series that showcases five experts answering five questions on a common theme, trend, or current event in the world of cyber.

A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published by Forbes.

The Coming Age of AI-Powered Propaganda

Foreign Affairs | April 7, 2023

CSET's Josh A. Goldstein and OpenAI's Girish Sastry co-authored an article on language models and disinformation, published in Foreign Affairs.

NPR published an article featuring expert insight from CSET's Josh Goldstein.

CSET's Josh A. Goldstein was quoted in a WIRED article about state-backed hacking groups using fake LinkedIn profiles to steal information from their targets, highlighting challenges in the disinformation space.

An article published in OODA Loop cited a report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, written in collaboration with OpenAI and the Stanford Internet Observatory. The report explores the potential misuse of language models for influence operations and provides a framework for assessing mitigation strategies.

BBC News cited a report authored by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in partnership with OpenAI and the Stanford Internet Observatory, and quoted Goldstein on the current state of AI systems.

WIRED highlighted CSET Research Analyst Micah Musser in an article that references a report published by CSET in collaboration with OpenAI and the Stanford Internet Observatory. The report examines the potential misuse of language models in influence operations and offers a framework for evaluating countermeasures.