Applications

AI could scale up disinformation campaigns, researchers warn

Global Government Forum
| May 27, 2021

CSET's use of GPT-3 points to a future in which automated messages could fuel disinformation campaigns online.

CSET's Tim Hwang joined the Federal Drive podcast to discuss tech companies' investments in AI and their misalignment with national priorities.

CSET Senior Fellow Andrew Lohn weighs the strengths and weaknesses of AI used in cybersecurity.

Can AI write believable misinformation?

Government Technology
| May 24, 2021

Using GPT-3, CSET was able to generate written misinformation that passes as human-authored.

Using the AI language model GPT-3, CSET was able to generate misinformation.

A CSET report examines how GPT-3, a new AI system, could automate future disinformation campaigns.

Vice featured CSET's report, "Truth, Lies, and Automation," which discusses how language processing models could be used to automate and fuel disinformation campaigns.

Behind India’s AI Patent Boom

Analytics India Magazine
| May 20, 2021

Analytics India Magazine highlighted CSET research on the AI patent boom in India and around the world.

Axios Future featured CSET's report, "Truth, Lies, and Automation," which examines GPT-3's startling potential to fuel automated disinformation campaigns.

Truth, Lies, and Automation

Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova
| May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.