Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can shape AI’s development and use, as well as biotechnology.

Report

CSET’s 2024 Annual Report

Center for Security and Emerging Technology
| March 2025

In 2024, CSET continued to deliver impactful, data-driven analysis at the intersection of emerging technology and security policy. Explore our annual report to discover key research highlights, expert testimony, and new analytical tools — all aimed at shaping informed, strategic decisions around AI and emerging tech.

Data Snapshot

Pushing the Limits: Huawei’s AI Chip Tests U.S. Export Controls

Jacob Feldgoise and Hanna Dohmen
| June 17, 2024

Since 2019, the U.S. government has imposed restrictive export controls on Huawei—one of China’s leading tech giants—seeking, in part, to hinder the company’s AI chip development efforts. This data snapshot reveals exactly how Huawei’s latest AI chip—the Ascend 910B—improves on the prior generation and demonstrates how export controls are likely hindering Huawei’s production.

Reports

Trust Issues: Discrepancies in Trustworthy AI Keywords Use in Policy and Research

Emelia Probasco, Kathleen Curlee, and Autumn Toney
| June 2024

Policy and research communities strive to mitigate AI harm while maximizing its benefits. Achieving effective and trustworthy AI requires a shared language. This analysis of policies across different countries and of the research literature identifies consensus on six critical concepts: accountability, explainability, fairness, privacy, security, and transparency.

Reports

Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning

Tim G. J. Rudner and Helen Toner
| June 2024

This paper is the fifth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. This paper explores the opportunities and challenges of building AI systems that “know what they don’t know.”
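
The notion of a model that “knows what it doesn’t know” can be illustrated with a toy uncertainty measure. The sketch below is not drawn from the paper; it computes the predictive entropy of a softmax classifier’s output and flags near-uniform predictions for human review. The logits, helper functions, and deferral threshold are all hypothetical.

```python
# A minimal sketch of one common uncertainty-quantification idea: flag inputs
# where the predictive distribution is close to uniform, i.e., where the model
# is unsure. All values below are illustrative, not from the CSET paper.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    # Entropy in nats; higher values mean the model is less certain.
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

# Hypothetical logits for two inputs over three classes.
logits = np.array([
    [4.0, 0.1, -2.0],   # confident prediction
    [0.2, 0.1, 0.0],    # ambiguous prediction
])
probs = softmax(logits)
entropy = predictive_entropy(probs)
max_entropy = np.log(probs.shape[-1])  # entropy of a uniform distribution

for p, h in zip(probs, entropy):
    decision = "defer to a human" if h > 0.5 * max_entropy else "accept"
    print(f"probs={np.round(p, 3)} entropy={h:.3f} -> {decision}")
```

In this sketch, the second input’s near-uniform distribution yields high entropy, so a deployed system would route it for human review rather than acting on the prediction; that deferral behavior is one simple way a system can act on “knowing what it doesn’t know.”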

Testimony

CSET Non-Resident Senior Fellow Kevin Wolf testified before the U.S.-China Economic and Security Review Commission on economic competition with China.

Reports

Putting Teeth into AI Risk Management

Matthew Schoemaker
| May 2024

President Biden's October 2023 executive order prioritizes the governance of artificial intelligence in the federal government, prompting the urgent creation of AI risk management standards and procurement guidelines. Soon after the order's signing, the Office of Management and Budget issued guidance for federal departments and agencies, including minimum risk standards for AI in federal contracts. As with cybersecurity, procurement rules will be used to enforce AI development best practices for federal suppliers. This report offers recommendations for implementing AI risk management procurement rules.

Reports

China, Biotechnology, and BGI

Anna Puglisi and Chryssa Rask
| May 2024

As the U.S. government considers banning genomics companies from China, a broader question arises about how the United States and other market economies should deal with China’s “national champions.” This paper provides an overview of one such company—BGI—and how China’s industrial policy shapes technology development in China and around the world.

Formal Response

Comment on BIS Request for Information

Jacob Feldgoise and Hanna Dohmen
| April 30, 2024

Jacob Feldgoise and Hanna Dohmen at the Center for Security and Emerging Technology (CSET) at Georgetown University offer the following response to the Bureau of Industry and Security’s (BIS) Notice of Proposed Rulemaking (NPRM): Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities (89 FR 5698).

Reports

An Argument for Hybrid AI Incident Reporting

Ren Bin Lee Dixon and Heather Frase
| March 2024

Artificial intelligence incidents have accompanied the rapid advancement of AI capabilities over the past decade. However, there is not yet a concerted policy effort in the United States to monitor, document, and aggregate AI incident data to enhance the understanding of AI-related harm and inform safety policies. This report proposes a federated approach consisting of hybrid incident reporting frameworks to standardize reporting practices and prevent gaps in incident data.

Formal Response

Comment on NIST RFI Related to the Executive Order Concerning Artificial Intelligence (88 FR 88368)

Mina Narayanan, Jessica Ji, and Heather Frase
| February 2, 2024

On February 2, 2024, CSET's Assessment and CyberAI teams submitted a response to NIST's Request for Information related to the Executive Order Concerning Artificial Intelligence (88 FR 88368). The submission compiles recommendations from six CSET reports and analyses to assist NIST in implementing the AI Executive Order's requirements.

Testimony

CSET's Ngor Luong testified before the U.S.-China Economic and Security Review Commission, where she discussed Chinese investments in military applications of AI.