Tag Archive: Data

CSET’s Steph Batalis, Katherine Quinn, and Rebecca Gelles shared their expert analysis in an op-ed published by Barron’s. Their piece examines the economic and scientific impact of proposed funding cuts to the National Institutes of Health (NIH), arguing that NIH-backed research plays a foundational role in driving medical innovation, biotechnology growth, and U.S. competitiveness.

CSET’s Catherine Aiken shared her expert insight in an article published by Nature. The article explores an open-access dataset called Cosmos 1.0, published in Scientific Data, which uses a Wikipedia-based AI model to identify the "Momentum 100," a data-driven list of rapidly emerging technologies such as reinforcement learning, blockchain, and 3D printing.

Mapping the AI Governance Landscape: April 2026 Update

MIT AI Risk Repository
| April 9, 2026

🔔 The number of AI-related governance documents continues to grow rapidly, but what risks, mitigations, and other concepts do these documents actually cover?

MIT AI Risk Initiative researchers expanded their pipeline with CSET to map over 1,000 AI governance documents from the AGORA dataset to several extensible taxonomies. These taxonomies cover AI risks, actors, industry sectors, AI lifecycle stages, legislative status, and AI system technical scope, complementing AGORA’s thematic taxonomy of risk factors, harms, governance strategies, incentives for compliance, and application areas.

Jack Le is an Intern at Georgetown University's Center for Security and Emerging Technology (CSET).

Mapping the AI Governance Landscape

MIT AI Risk Repository
| October 15, 2025

🔔 The number of AI-related governance documents is rapidly proliferating, but what risks, mitigations, and other concepts do these documents actually cover?

MIT AI Risk Initiative researchers Simon Mylius, Peter Slattery, Yan Zhu, Alexander Saeri, Jess Graham, Michael Noetel, and Neil Thompson teamed up with CSET’s Mina Narayanan and Adrian Thinnyun to pilot an approach to map over 950 AI governance documents to several extensible taxonomies. These taxonomies cover AI risks and actors, industry sectors targeted, and other AI-related concepts, complementing AGORA’s thematic taxonomy of risk factors, harms, governance strategies, incentives for compliance, and application areas.

In the second installment of our blog series analyzing 147 AI-related laws from AGORA enacted by Congress between January 2020 and March 2025, we explore the governance strategies, risk-related concepts, and harms addressed in the legislation. In the first blog, we showed that the majority of these AI-related legislative documents were drawn from National Defense Authorization Acts and apply to national security contexts.

Exploring AI legislation in Congress with AGORA: Origin and Application Domains

Mina Narayanan and Sonali Subbu Rathinam
| July 23, 2025

In this two-part analysis, we use data from the Emerging Technology Observatory's AGORA to explore AI-related legislation that was enacted by Congress between January 2020 and March 2025. This first blog explores the origin and application domains of the AI-related legislation we reviewed. The second blog examines the governance strategies, risk-related concepts, and harms covered by this legislation.

Identifying AI Research

Christian Schoeberl, Autumn Toney, and James Dunham
| July 2023

The choice of method for surfacing AI-relevant publications shapes the resulting research findings. This report provides a quantitative analysis of the methods available to researchers for identifying AI-relevant research within CSET’s merged corpus, and showcases the research implications of each method.

Brian Love is a Senior Software Engineer at the Center for Security and Emerging Technology, where he works on the Emerging Technology Observatory initiative.

Introducing the Emerging Technology Observatory

Emerging Technology Observatory
| October 19, 2022

Making sense of the often overwhelming world of emerging tech with data-driven tools and resources.