Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence, such as talent, data, and computational power, as well as how AI can be used in cybersecurity and other national security settings. We also research biotechnology and the policy tools that can shape AI's development and use.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Will AI Make Cyber Swords or Shields

Andrew Lohn
| July 27, 2022

We aim to demonstrate the value of mathematical models for policy debates about technological progress in cybersecurity by considering phishing, vulnerability discovery, and the dynamics between patching and exploitation. We then adjust the inputs to those mathematical models to match some possible advances in their underlying technology.
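The patching-versus-exploitation dynamic described above can be illustrated with a deliberately simple toy model. The constant patch rate, the exponential form, and the function names below are assumptions chosen for illustration; they are not the report's actual equations:

```python
import math

def unpatched_fraction(t_days, patch_rate=0.05):
    """Fraction of systems still unpatched t days after a patch ships,
    assuming each system patches independently at a constant daily rate.
    (Toy assumption for illustration, not the report's model.)"""
    return math.exp(-patch_rate * t_days)

def expected_exposure(exploit_delay_days, horizon_days, patch_rate=0.05):
    """Cumulative exposure: the unpatched fraction summed over each day
    from when a working exploit first appears until the horizon.
    A smaller exploit_delay_days means attackers arrive while more
    systems remain unpatched, so exposure grows."""
    return sum(unpatched_fraction(t, patch_rate)
               for t in range(exploit_delay_days, horizon_days))

# An AI-driven speedup in vulnerability discovery can be modeled as a
# shorter exploit delay: exposure rises when exploits arrive sooner.
fast = expected_exposure(exploit_delay_days=5, horizon_days=90)
slow = expected_exposure(exploit_delay_days=30, horizon_days=90)
```

Adjusting inputs like `patch_rate` or `exploit_delay_days` is the kind of lever the report's models turn to represent advances in the underlying technology.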

Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models. Although these attacks were initially conceived of and studied digitally, by perturbing the raw pixel values of an image, recent work has demonstrated that they can successfully transfer to the physical world. This can be accomplished by printing out the patch and placing it in scenes of newly captured images or video footage.
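The digital version of the attack amounts to overwriting a region of an image's pixels with the patch. A minimal sketch of that compositing step is below; `apply_patch` is a hypothetical helper name, and real physical-world attacks instead print the patch and photograph it in the scene:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overlay an adversarial patch onto a copy of an image at (top, left).
    Illustrative compositing only; generating a patch that actually fools
    a model requires optimizing its pixels against that model."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# Example: paste a 3x3 white patch into an 8x8 black image.
image = np.zeros((8, 8, 3), dtype=np.uint8)
patch = np.full((3, 3, 3), 255, dtype=np.uint8)
patched = apply_patch(image, patch, top=2, left=2)
```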

CSET Senior Fellow Andrew Lohn testified before the House of Representatives Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation at a hearing on "Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks." Lohn discussed the application of AI systems in cybersecurity and AI’s vulnerabilities.

Reports

Quad AI

Husanjot Chahal, Ngor Luong, Sara Abdulla, and Margarita Konaev
| May 2022

Through the Quad forum, the United States, Australia, Japan and India have committed to pursuing an open, accessible and secure technology ecosystem and offering a democratic alternative to China’s techno-authoritarian model. This report assesses artificial intelligence collaboration across the Quad and finds that while Australia, Japan and India each have close AI-related research and investment ties to both the United States and China, they collaborate far less with one another.

CSET Senior Fellow Andrew Lohn testified before the House of Representatives Science, Space and Technology Subcommittee on Investigations and Oversight and Subcommittee on Research and Technology at a hearing on "Securing the Digital Commons: Open-Source Software Cybersecurity." Lohn discussed how the United States can maximize sharing within the artificial intelligence community while reducing risks to the AI supply chain.

CSET Senior Fellow Andrew Lohn testified before the U.S. Senate Armed Services Subcommittee on Cybersecurity at a hearing on artificial intelligence applications to operations in cyberspace. Lohn discussed AI's capabilities and vulnerabilities in cyber defense and offense.

Data Snapshot

Examining Patent Data in PARAT

Sara Abdulla
| March 30, 2022

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This is the fourth in a series of Snapshots exploring CSET’s Private-sector AI-Related Activity Tracker (PARAT). Check in every two weeks to see our newest Snapshot, and explore PARAT, which collects data related to companies’ AI research and development to inform analysis of the global AI sector.

Reports

Securing AI

Andrew Lohn and Wyatt Hoffman
| March 2022

Like traditional software, vulnerabilities in machine learning software can lead to sabotage or information leakages. Also like traditional software, sharing information about vulnerabilities helps defenders protect their systems and helps attackers exploit them. This brief examines some of the key differences between vulnerabilities in traditional and machine learning systems and how those differences can affect the vulnerability disclosure and remediation processes.

CSET Research Analyst Dakota Cary testified before the U.S.-China Economic and Security Review Commission at a hearing on "China’s Cyber Capabilities: Warfare, Espionage, and Implications for the United States." Cary discussed the cooperative relationship between Chinese universities and China’s military and intelligence services to develop talent with the capabilities to perform state-sponsored cyberespionage operations.

Reports

AI and Compute

Andrew Lohn and Micah Musser
| January 2022

Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trendline is unsustainable. Because of cost, hardware availability and engineering difficulties, the next decade of AI can't rely exclusively on applying more and more computing power to drive further progress.
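The unsustainability of that trend can be sanity-checked with back-of-the-envelope arithmetic. The 3.4-month doubling figure comes from the abstract above; everything else here is simple calculation:

```python
# Compute growth implied by a 3.4-month doubling time over 2012-2018.
months = (2018 - 2012) * 12      # 72 months
doublings = months / 3.4         # about 21 doublings
growth = 2 ** doublings          # over a million-fold increase
print(f"{doublings:.1f} doublings -> roughly {growth:,.0f}x more compute")

# Extending the same trend another decade would require a further
# 2 ** (120 / 3.4) increase -- tens of billions of times more compute --
# which is why progress cannot rely on compute scaling alone.
decade_growth = 2 ** (120 / 3.4)
```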