Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Reports

AI and the Future of Workforce Training

Matthias Oschinski, Ali Crawford, and Maggie Wu
| December 2024

The emergence of artificial intelligence as a general-purpose technology could profoundly transform work across industries and occupations. Whereas previous technological shifts largely enhanced productivity and wages for white-collar workers while creating displacement pressures for blue-collar workers, AI may significantly disrupt both groups. This report examines the changing landscape of workforce development, highlighting the crucial role of community colleges, alternative career pathways, and AI-enabled training solutions in preparing workers for this transition.

Data Visualization

ETO AGORA

December 2024

The Emerging Technology Observatory’s AGORA (AI GOvernance and Regulatory Archive) is a living collection of AI-relevant laws, regulations, standards, and other governance documents from the United States and around the world. Updated regularly, AGORA includes summaries, document text, thematic tags, and filters to help users quickly discover and analyze key developments in AI governance.

Reports

Staying Current with Emerging Technology Trends: Using Big Data to Inform Planning

Emelia Probasco and Christian Schoeberl
| December 2024

This report proposes an approach to systematically identify promising research using big data and to analyze that research’s potential impact through structured engagements with subject-matter experts. The methodology offers a repeatable way to proactively monitor the research landscape and inform strategic R&D priorities.

Translation

Read our translation of a draft Chinese national standard addressing the safety and security of generative AI services.

Translation

Read our translation of a notice from China’s Ministry of Commerce that bans the export of gallium, germanium, antimony, and superhard materials to the United States.

Formal Response

RFI Response: Safety Considerations for Chemical and/or Biological AI Models

Steph Batalis and Vikram Venkatram
| December 3, 2024

Dr. Steph Batalis and Vikram Venkatram offered the following comment in response to the National Institute of Standards and Technology's request for information on safety considerations for chemical and biological AI models.

Artificial intelligence (AI) tools offer exciting possibilities for advancing scientific, biomedical, and public health research. At the same time, these tools have raised concerns about their potential to contribute to biological threats, such as those from pathogens and toxins. This response describes pathways that could result in biological harm, with or without AI, and a range of governance tools and mitigation measures to address them.

Data Snapshot

Funding the AI Cloud — Amazon, Alphabet, and Microsoft’s Cloud Computing Investments, Part 3

Christian Schoeberl and Jack Corrigan
| November 20, 2024

Data Snapshots are informative descriptions and quick analyses that dig into CSET’s unique data resources. This three-part series uses data from a variety of sources to track how three cloud providers — Amazon, Alphabet, and Microsoft — distribute their financial resources to create and sustain demand for their cloud services. By investing in data centers and workforce training, these large tech platforms draw developers, companies, and governments to their tools and services.

Reports

AI Safety and Automation Bias

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
| November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla’s autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.

Testimony

Sam Bresnick testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law regarding tech companies’ ties to China and the implications of those ties in a potential future conflict.