Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

Acquiring AI Companies: Tracking U.S. AI Mergers and Acquisitions

Jack Corrigan, Ngor Luong, and Christian Schoeberl
| November 2024

Maintaining U.S. technological leadership in the years ahead will require policymakers to promote competition in the AI market and prevent industry leaders from wielding their power in harmful ways. This brief examines trends in U.S. mergers and acquisitions of artificial intelligence companies. The authors found that AI-related M&A deals have grown significantly over the last decade, with large U.S. tech companies being the most prolific acquirers of AI firms.

Data Snapshot

Funding the AI Cloud — Amazon, Alphabet, and Microsoft’s Cloud Computing Investments, Part 2

Christian Schoeberl and Jack Corrigan
| November 13, 2024

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This three-part series uses data from a variety of sources to track how three cloud providers—Amazon, Alphabet, and Microsoft—distribute their financial resources to create and sustain demand for their cloud services. By investing in data centers and workforce training, these large tech platforms draw developers, companies, and governments to their tools and services.

Reports

Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles
| November 2024

Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.
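As a brief illustration of the kind of "direct" risk described here (this sketch is ours, not an example drawn from the report): code generation models have been observed to emit SQL queries built by string interpolation, which are vulnerable to injection, where a parameterized query would be safe. The function names and schema below are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Pattern a code-generation model may plausibly emit:
    # building SQL by string interpolation is injectable.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"          # classic injection payload
unsafe_rows = find_user_unsafe(conn, payload)  # matches every row
safe_rows = find_user_safe(conn, payload)      # matches no rows
```

A reviewer skimming model output can easily miss the difference, which is one reason such flaws count as a direct risk rather than a hypothetical one.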

Data Snapshot

Funding the AI Cloud — Amazon, Alphabet, and Microsoft’s Cloud Computing Investments, Part 1

Christian Schoeberl and Jack Corrigan
| October 30, 2024

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This three-part series uses data from a variety of sources to track how three cloud providers—Amazon, Alphabet, and Microsoft—distribute their financial resources to create and sustain demand for their cloud services. By investing in data centers and workforce training, these large tech platforms draw developers, companies, and governments to their tools and services.

Reports

Fueling China’s Innovation: The Chinese Academy of Sciences and Its Role in the PRC’s S&T Ecosystem

Cole McFaul, Hanna Dohmen, Sam Bresnick, and Emily S. Weinstein
| October 2024

The Chinese Academy of Sciences is among the most important S&T organizations in the world and plays a key role in advancing Beijing’s S&T objectives. This report provides an in-depth look into the organization and its various functions within China’s S&T ecosystem, including advancing S&T research, fostering the commercialization of critical and emerging technologies, and contributing to S&T policymaking.

Reports

Through the Chat Window and Into the Real World: Preparing for AI Agents

Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee
| October 2024

Computer scientists have long sought to build systems that can actively and autonomously carry out complicated goals in the real world—commonly referred to as artificial intelligence "agents." Recently, significant progress in large language models has fueled new optimism about the prospect of building sophisticated AI agents. This CSET-led workshop report synthesizes findings from a May 2024 workshop on this topic, including what constitutes an AI agent, how the technology is improving, what risks agents exacerbate, and intervention points that could help.

Reports

Securing Critical Infrastructure in the Age of AI

Kyle Crichton, Jessica Ji, Kyle Miller, John Bansemer, Zachary Arnold, David Batz, Minwoo Choi, Marisa Decillis, Patricia Eke, Daniel M. Gerstein, Alex Leblang, Monty McGee, Greg Rattray, Luke Richards, and Alana Scott
| October 2024

As critical infrastructure operators and providers seek to harness the benefits of new artificial intelligence capabilities, they must also manage associated risks from both AI-enabled cyber threats and potential vulnerabilities in deployed AI systems. In June 2024, CSET led a workshop to assess these issues. This report synthesizes our findings, drawing on lessons from cybersecurity and insights from critical infrastructure sectors to identify challenges and potential risk mitigations associated with AI adoption.

Reports

Governing AI with Existing Authorities

Jack Corrigan, Owen Daniels, Lauren Kahn, and Danny Hague
| July 2024

A core question in policy debates around artificial intelligence is whether federal agencies can use their existing authorities to govern AI or if the government needs new legal powers to manage the technology. The authors argue that relying on existing authorities is the most effective approach to promoting the safe development and deployment of AI systems, at least in the near term. This report outlines a process for identifying existing legal authorities that could apply to AI and highlights areas where additional legislative or regulatory action may be needed.

Reports

Enabling Principles for AI Governance

Owen Daniels and Dewey Murdick
| July 2024

How to govern artificial intelligence is rightfully a top-of-mind concern for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.

Data Snapshot

Pushing the Limits: Huawei’s AI Chip Tests U.S. Export Controls

Jacob Feldgoise and Hanna Dohmen
| June 17, 2024

Since 2019, the U.S. government has imposed restrictive export controls on Huawei—one of China’s leading tech giants—seeking, in part, to hinder the company’s AI chip development efforts. This data snapshot reveals exactly how Huawei’s latest AI chip—the Ascend 910B—improves on the prior generation and demonstrates how export controls are likely hindering Huawei’s production.