This is Part 2 of a two-part series.
Executive Summary
The age of information enabled the age of disinformation. Powered by the speed and volume of the internet, disinformation has emerged as an instrument of strategic competition and domestic political warfare. It is used by both state and non-state actors to shape public opinion, sow chaos, and erode societal trust. Artificial intelligence (AI), specifically machine learning (ML), is poised to amplify disinformation campaigns—influence operations that involve covert efforts to intentionally spread false or misleading information.
In this series, we offer a systematic examination of how AI/ML technologies could enhance these operations. Part 1 described the stages and common techniques of disinformation campaigns. In this paper, we examine how AI/ML technologies can enhance specific disinformation techniques, how they may exacerbate current trends, and how they may shape future campaigns.
Our findings show that the use of AI in disinformation campaigns is not only plausible but already underway. Powered by modern computing, ML algorithms excel at harnessing data and finding patterns that are difficult for humans to observe. The data-rich environment of modern online life creates terrain ideally suited for ML techniques to precisely target individuals. Language generation capabilities and the tools that enable deepfakes can already manufacture viral disinformation at scale and empower digital impersonation. The same technologies, paired with human operators, may soon enable social bots to mimic human online behavior and to troll humans with precisely tailored messages. Several trends may exacerbate these risks: the blurring of lines between foreign and domestic influence operations, the outsourcing of these operations to private companies that provide influence as a service, and the ongoing conflict over how to distinguish harmful disinformation from protected speech.
We conclude that a future of AI-powered disinformation campaigns is all but inevitable. That future need not be altogether disruptive, however, if societies act now. Mitigating and countering disinformation is a whole-of-society effort in which governments, technology platforms, AI researchers, the media, and individual information consumers each bear responsibility.
Our key recommendations include:
Develop technical mitigations to inhibit and detect ML-powered disinformation campaigns. Social media companies and Congress should restrict threat actors' and their proxies' access to user data. The U.S. government and the private sector should increase transparency through interoperable standards for the detection, forensics, and digital provenance of synthetic media. Chatbots should be labeled so that humans know when they are engaging with an AI system.
Develop an early warning system for disinformation campaigns. Expand cooperation and intelligence sharing among the federal government, industry partners, state and local governments, and like-minded democratic nations to develop a common operational picture, detect the use of novel ML-enabled techniques, and enable rapid response.
Build a networked collective defense across platforms. Online platforms are in the best position to discover and report on known campaigns. Because these campaigns may span multiple platforms, information must be shared quickly to enable coordinated responses. All platforms, regardless of size, should increase transparency and accountability by establishing policies and processes to discover, disrupt, and report on disinformation campaigns. Congress should remove impediments to sharing threat information while enabling counter-disinformation research. Platforms and researchers should formalize mechanisms for cross-platform collaboration and threat-information sharing.
Examine and deter the use of services that enable disinformation campaigns. As ML-enabled content generation tools proliferate, influence-as-a-service firms will adopt them, further increasing the scale of AI-generated political discourse. Congress should examine the current use of these tools by firms providing influence for hire and build norms to discourage their use by candidates for public office.
Integrate threat modeling and red-teaming processes to guard against abuse. Platforms and AI researchers should adapt cybersecurity best practices to disinformation operations, apply them in the early stages of product design, and test potential mitigations before release.
Build and apply ethical principles for the publication of AI research that could fuel disinformation campaigns. The AI research community should assume that disinformation operators will misuse openly released research. It should develop a publication risk framework to guard against such misuse and to recommend mitigations.
Establish a process for the media to report on disinformation without amplifying it. Traditional media organizations should use threat modeling to examine how the flow of information to them can be exploited by disinformation actors and build processes to guard against unwittingly amplifying disinformation campaigns.
Reform the recommender algorithms that have empowered current campaigns. Platforms should increase transparency and give vetted researchers access to audit and help explain how recommendation algorithms make decisions and how threat actors can manipulate them. Platforms should also invest in solutions that counter the formation of information bubbles, which contribute to polarization.
Raise awareness and build public resilience against ML-enabled disinformation. The U.S. government, social media platforms, state and local governments, and civil society should develop school and adult education programs and arm frequently targeted communities with tools to discern ML-enabled disinformation techniques.
AI-enabled disinformation campaigns present a growing threat to the epistemic security of democratic societies. Our report focuses on social media and the online information environment because they will be most directly affected by AI-enabled disinformation operations. These operations are part of a larger challenge that has undermined societal trust in government and in the information upon which democracies rely. While these recommendations may help stem the tide, the ultimate line of defense against automated disinformation is the discerning human on the receiving end of the message. Efforts to help the public detect disinformation and the campaigns that spread it are critical to building resilience and undermining this threat.