Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

The State of AI-Related Apprenticeships

Luke Koslosky and Jacob Feldgoise
| February 2025

As artificial intelligence permeates the economy, the demand for AI talent at all levels of educational attainment will expand accordingly. Apprenticeships are an effective education and training pathway in other industries, but are they suitable for AI-related roles? This report analyzes trends in AI-related apprenticeships across the United States from 2013 through 2023. It explores the growth of these programs, completion rates, demographic and geographic information, and the organizations sponsoring these programs.

Reports

Chinese Critiques of Large Language Models

William Hannas, Huey-Meei Chang, Maximilian Riesenhuber, and Daniel Chou
| January 2025

Large generative models are widely viewed as the most promising path to general (human-level) artificial intelligence and attract billions of dollars in investment. The present enthusiasm notwithstanding, a chorus of high-ranking Chinese scientists regards this singular approach to AGI as ill-advised. This report documents these critiques in China’s research, public statements, and government planning, while pointing to additional, pragmatic reasons for China’s pursuit of a diversified research portfolio.

Reports

AI and the Future of Workforce Training

Matthias Oschinski, Ali Crawford, and Maggie Wu
| December 2024

The emergence of artificial intelligence as a general-purpose technology could profoundly transform work across industries, potentially affecting a variety of occupations. While previous technological shifts largely enhanced productivity and wages for white-collar workers but led to displacement pressures for blue-collar workers, AI may significantly disrupt both groups. This report examines the changing landscape of workforce development, highlighting the crucial role of community colleges, alternative career pathways, and AI-enabled training solutions in preparing workers for this transition.

Formal Response

RFI Response: Safety Considerations for Chemical and/or Biological AI Models

Steph Batalis and Vikram Venkatram
| December 3, 2024

Dr. Steph Batalis and Vikram Venkatram offered the following comment in response to the National Institute of Standards and Technology’s request for information on safety considerations for chemical and biological AI models.

Artificial intelligence (AI) tools offer exciting possibilities for advancing scientific, biomedical, and public health research. At the same time, these tools have raised concerns about their potential to contribute to biological threats, like those from pathogens and toxins. This report describes pathways that result in biological harm, with or without AI, and a range of governance tools and mitigation measures to address them.

Reports

AI Safety and Automation Bias

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
| November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla’s autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.

Reports

Acquiring AI Companies: Tracking U.S. AI Mergers and Acquisitions

Jack Corrigan, Ngor Luong, and Christian Schoeberl
| November 2024

Maintaining U.S. technological leadership in the years ahead will require policymakers to promote competition in the AI market and prevent industry leaders from wielding their power in harmful ways. This brief examines trends in U.S. mergers and acquisitions of artificial intelligence companies. The authors found that AI-related M&A deals have grown significantly over the last decade, with large U.S. tech companies being the most prolific acquirers of AI firms.

Reports

Fueling China’s Innovation: The Chinese Academy of Sciences and Its Role in the PRC’s S&T Ecosystem

Cole McFaul, Hanna Dohmen, Sam Bresnick, and Emily S. Weinstein
| October 2024

The Chinese Academy of Sciences is among the most important S&T organizations in the world and plays a key role in advancing Beijing’s S&T objectives. This report provides an in-depth look into the organization and its various functions within China’s S&T ecosystem, including advancing S&T research, fostering the commercialization of critical and emerging technologies, and contributing to S&T policymaking.

Reports

Governing AI with Existing Authorities

Jack Corrigan, Owen Daniels, Lauren Kahn, and Danny Hague
| July 2024

A core question in policy debates around artificial intelligence is whether federal agencies can use their existing authorities to govern AI or if the government needs new legal powers to manage the technology. The authors argue that relying on existing authorities is the most effective approach to promoting the safe development and deployment of AI systems, at least in the near term. This report outlines a process for identifying existing legal authorities that could apply to AI and highlights areas where additional legislative or regulatory action may be needed.

Reports

Enabling Principles for AI Governance

Owen Daniels and Dewey Murdick
| July 2024

How to govern artificial intelligence is rightfully a top concern for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.