A new report lays out the ways that cutting-edge text-generating AI models could be used to aid disinformation campaigns.
Why it matters: In the wrong hands, text-generating systems could be used to scale up state-sponsored disinformation efforts, and humans would struggle to know when they’re being lied to.
How it works: Text-generating models like OpenAI’s leading GPT-3 are trained on vast volumes of internet data and learn to write eerily lifelike text from human prompts.
- In their new report released this morning, researchers from Georgetown’s Center for Security and Emerging Technology (CSET) examined how GPT-3 might be used to turbocharge disinformation campaigns like the one carried out by Russia’s Internet Research Agency (IRA) during the 2016 election.
What they found: While “no currently existing autonomous system could replace the entirety of the IRA,” algorithm-based tech paired with experienced human operators produces results that are nothing less than frightening.
Read the full article at Axios.