Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also do research on the policy tools that can be used to shape AI’s development and use, and on biotechnology.

Report

China’s Advanced AI Research

William Hannas, Huey-Meei Chang, Daniel Chou, and Brian Fleeger | July 2022

China is following a national strategy to lead the world in artificial intelligence by 2030, including by pursuing “general AI” that can act autonomously in novel circumstances. Open-source research identifies 30 Chinese institutions engaged in one or more aspects of this pursuit, including machine learning, brain-inspired AI, and brain-computer interfaces. This report previews a CSET pilot program that will track China’s progress and provide timely alerts.

Analysis

Downrange: A Survey of China’s Cyber Ranges

Dakota Cary | September 2022

China is rapidly building cyber ranges that allow cybersecurity teams to test new tools, practice attack and defense, and evaluate the cybersecurity of a particular product or service. The presence of these facilities suggests a concerted effort on the part of the Chinese government, in partnership with industry and academia, to advance technological research and upskill its cybersecurity workforce—more evidence that China has entered near-peer status with the United States in the cyber domain.

Analysis

Will AI Make Cyber Swords or Shields?

Andrew Lohn and Krystal Jackson | August 2022

Funding and priorities for technology development today determine the terrain for digital battles tomorrow, and they provide the arsenals for both attackers and defenders. Unfortunately, researchers and strategists disagree on which technologies will ultimately be most beneficial and which will cause more harm than good. This report provides three examples showing that, while the future of technology is impossible to predict with certainty, there is enough empirical data and mathematical theory to have these debates with more rigor.

Testimony

CSET Senior Fellow Andrew Lohn testified before the House of Representatives Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation at a hearing on "Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks." Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.

Testimony

CSET Senior Fellow Andrew Lohn testified before the House of Representatives Science, Space, and Technology Subcommittee on Investigations and Oversight and Subcommittee on Research and Technology at a hearing on "Securing the Digital Commons: Open-Source Software Cybersecurity." Lohn discussed how the United States can maximize sharing within the artificial intelligence community while reducing risks to the AI supply chain.

Testimony

CSET Senior Fellow Andrew Lohn testified before the U.S. Senate Armed Services Subcommittee on Cybersecurity at a hearing on artificial intelligence applications to operations in cyberspace. Lohn discussed AI’s capabilities and vulnerabilities in cyber defense and offense.

Analysis

AI and Compute

Andrew Lohn and Micah Musser | January 2022

Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trendline is unsustainable. Because of cost, hardware availability and engineering difficulties, the next decade of AI can't rely exclusively on applying more and more computing power to drive further progress.
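
As a rough illustration of why that pace is hard to sustain, the sketch below works out what a 3.4-month doubling time implies over a six-year window. The window length and the starting cost are hypothetical placeholders chosen for illustration, not figures taken from the report.

```python
# Back-of-the-envelope sketch: growth implied by a 3.4-month doubling time.
# The 72-month window and the illustrative starting cost are assumptions for
# this example only; they are not drawn from the CSET report.

DOUBLING_MONTHS = 3.4
window_months = 6 * 12  # roughly the 2012-2018 span discussed above

growth_factor = 2 ** (window_months / DOUBLING_MONTHS)
print(f"Implied compute growth over {window_months} months: ~{growth_factor:,.0f}x")

# If training cost scaled in proportion to compute, even a tiny initial budget
# would balloon over the same window -- the unsustainability the report notes.
initial_cost_usd = 100  # hypothetical starting cost
print(f"A ${initial_cost_usd} training run would imply ~${initial_cost_usd * growth_factor:,.0f}")
```

The point of the arithmetic is simply that an exponential with a months-long doubling time compounds into million-fold growth within a decade, which is why cost and hardware constraints eventually bind.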

Analysis

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, and Ido Wulkan | December 2021

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

Analysis

Making AI Work for Cyber Defense

Wyatt Hoffman | December 2021

Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems can require trade-offs based on the types of threats, defenders are often caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.

Analysis

AI and the Future of Disinformation Campaigns

Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, and Ido Wulkan | December 2021

Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

Analysis

Federal Prize Competitions

Ali Crawford and Ido Wulkan | November 2021

In science and technology, U.S. federal prize competitions are a way to promote innovation, advance knowledge, and solicit technological solutions to problems. In this report, the authors identify the unique advantages of such competitions over traditional R&D processes and explain how these advantages might benefit artificial intelligence research.