Applications

CSET Senior Fellow Andrew Lohn weighs the strengths and weaknesses of AI used in cybersecurity.

Can AI write believable misinformation?

Government Technology
| May 24, 2021

Using GPT-3, CSET was able to generate written misinformation that could pass as human-authored.
A CSET report examines how GPT-3, a new AI system, could automate future disinformation campaigns.

Vice featured CSET's report, "Truth, Lies, and Automation," which examines how language models could be used to automate and fuel disinformation campaigns.

Behind India’s AI Patent Boom

Analytics India Magazine
| May 20, 2021

Analytics India highlighted CSET research studying the AI patent boom in India and around the world.

Axios Future featured CSET's report, "Truth, Lies, and Automation," which found that GPT-3 has a startling ability to fuel automated disinformation campaigns.

Truth, Lies, and Automation

Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova
| May 2021

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.

Ryan Fedasiuk's research on China's media manipulation found that 20 million part-time volunteers and 2 million paid commentators have been recruited to shape public opinion online in China.

In collaboration with the Partnership on AI, CSET's AI Incident Database has documented 1,200 cases of AI system failures.