Showcasing our researchers’ work and their latest media appearances as they weigh in on developments at the intersection of national security and emerging technology.
The Washington Post
The Washington Post spoke with CSET Deputy Director of Analysis Margarita Konaev about the expected uses of artificial intelligence technology in the new year. According to Konaev, “this year will bring more use of artificial intelligence software in war, particularly for software that helps soldiers recognize objects and their location. Leaders will probably use it more for decision-making in battlefield operations, equipment maintenance and supply chain management.” She added that these models could perform even better, because the past year of war in Ukraine has generated an abundance of data to feed and train them.
arXiv
In a new report posted on arXiv, CSET’s Josh Goldstein and Micah Musser, CSET alumna Katerina Sedova, and co-authors at OpenAI and the Stanford Internet Observatory explored how large language models (LLMs) could be misused to generate and deploy digital propaganda for future influence operations. The authors provided a framework for assessing potential mitigation strategies and published a summary of their report’s analysis in a blog post. “Our bottom-line judgment is that language models will be useful for propagandists and will likely transform online influence operations,” they noted. “Even if the most advanced models are kept private or controlled through application programming interface (API) access, propagandists will likely gravitate towards open-source alternatives and nation states may invest in the technology themselves.”
CyberScoop
CyberScoop reached out to Research Fellow Josh Goldstein to learn more about the above-mentioned report, produced in collaboration with OpenAI and the Stanford Internet Observatory. While there is no evidence yet that LLMs have been used for influence operations, the authors offer a variety of interventions to mitigate potential threats. “We don’t want to wait until these models are deployed for influence operations at scale before we start to consider mitigations,” Goldstein said. He also cautioned against overstating the potential of these models to revolutionize influence operations: “I think it’s critical that we don’t engage in threat inflation. Just because we assess that language models will be useful does not mean that all influence operations using language models will automatically have a big effect.” The report was also featured in stories by Grid, Platformer, VentureBeat, Interesting Engineering and Business Insider India, as well as an opinion piece on Medium, among other outlets.
BBC World News
In a live TV interview with the BBC’s Asia Business Report, Research Analyst Hanna Dohmen discussed China’s new regulations on deepfake technology. “These regulations also fit into a broad trend of Beijing’s efforts to prevent political and social disruption by increasing content regulation and enhancing censorship efforts,” Dohmen said. China’s move to regulate deepfakes is politically significant because “these are the first in the world we’ve seen. This does not mean that other countries are not grappling with this issue, but we are seeing that China is really kind of setting the standard internationally and providing a potential framework for other countries to potentially approach similar regulation,” according to Dohmen. On the strength of that interview, she was invited to appear the next day on ABC News Breakfast, a leading Australian TV program, to elaborate on the significance of China’s latest moves to regulate deepfakes.
Lawfare
In an opinion piece for Lawfare, Andrew W. Marshall Fellow and Policy Communications Specialist Owen Daniels explained the “revolution in military affairs” (RMA) framework, a mental model for evaluating technology’s effect on warfare, and how it can be applied to artificial intelligence’s impact on national security. Drawing from his report, Daniels addressed misunderstandings surrounding the RMA framework, arguing that it “remains a powerful analytical tool for thinking about the relationships among military technologies, operations, and organization. It can help policymakers and analysts think specifically and systematically about the ways technology impacts warfare.” By using the RMA framework, policymakers can understand why AI may not immediately spark a revolution in military affairs, how it might do so in the future, and what its impact on future warfare could be.
GCN
After Research Analyst Jack Corrigan participated in a GCN webinar on supply chain vulnerabilities earlier this week, Chris Teale recapped Corrigan’s comments and the findings of his October report with Sergio Fontanez and Michael Kratsios, Banned in D.C.: Examining Government Approaches to Foreign Technology Threats. The report found that more than 1,600 state and local agencies had purchased Chinese-made IT and communications equipment and services that were prohibited at the federal level. During the webinar, Corrigan said state governments should work to align their procurement practices with those of the federal government. “If the goal is to keep this untrustworthy foreign technology from entering our critical systems and networks more broadly, we need to have a more cohesive, nationwide approach that involves the federal government, as well as other levels of government and then private industry,” Corrigan said.
Spotlight on CSET Experts: Josh A. Goldstein
Josh A. Goldstein is a Research Fellow on CSET’s CyberAI Project. Prior to joining CSET, he was a pre- and postdoctoral fellow at the Stanford Internet Observatory. His research has included investigating covert influence operations on social media platforms, studying the effects of foreign interference on democratic societies, and exploring how emerging technologies will impact the future of propaganda campaigns. His work has been published in outlets including Brookings, Lawfare, Foreign Policy, CyberScoop, and the Harvard Misinformation Review.
Interested in speaking with Josh or our other experts? Contact our Director of External Affairs, Lynne Weil, at Lynne.Weil@georgetown.edu.
Want to stay up to date on the latest CSET research? Sign up for our day-of-release reports and take a look at our biweekly newsletter, policy.ai.