Reports

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence, such as talent, data, and computational power, as well as how AI can be used in cybersecurity and other national security settings. We also research biotechnology and the policy tools that can shape AI's development and use.

Reports

China’s Military AI Wish List

Emelia Probasco, Sam Bresnick, and Cole McFaul
| February 2026

This report examines thousands of Chinese-language open-source requests for proposal (RFPs) published by the People’s Liberation Army between January 1, 2023, and December 31, 2024. The RFPs the authors reviewed offer insights into the PLA’s priorities and ambitions for AI-enabled military technologies associated with C5ISRT: command, control, communications, computers, cyber, intelligence, surveillance, reconnaissance, and targeting.

Reports

AI for Military Decision-Making

Emelia Probasco, Helen Toner, Matthew Burtell, and Tim G. J. Rudner
| April 2025

Artificial intelligence is reshaping military decision-making. This concise overview explores how AI-enabled systems can enhance situational awareness and accelerate critical operational decisions, even in high-pressure, dynamic environments. It also highlights the need for clear operational scopes, robust training, and vigilant risk mitigation to counter challenges inherent in using AI, such as data biases and automation pitfalls. The report offers a balanced framework to help military leaders integrate AI responsibly and effectively.

Reports

Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 4: Capstone

Danny Hague, Natalie Roisman, Matthias Oschinski, and Carolina Pachon
| March 2025

Georgetown University’s Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy (Tech Institute), led a series of invite-only roundtables over the course of 2024 to grapple with the legal liability questions that artificial intelligence poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. The resulting report, written in 2024 after those discussions, is the fourth and final installment of a four-part series.

Reports

Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 3: Government as a Buyer of AI

Carolina Oxenstierna, Aaron Snow, and Danny Hague
| March 2025

Georgetown University’s Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy (Tech Institute), led a series of invite-only roundtables over the course of 2024 to grapple with the legal liability questions that artificial intelligence poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. The resulting report, written in 2024 after those discussions, is the third installment of a four-part series.

Reports

Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 2: Government as an Employer of AI Talent

Danny Hague, Carolina Oxenstierna, and Matthias Oschinski
| March 2025

Georgetown University’s Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy (Tech Institute), led a series of invite-only roundtables over the course of 2024 to grapple with the legal liability questions that artificial intelligence poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. The resulting report, written in 2024 after those discussions, is the second installment of a four-part series.

Reports

Government AI Hire, Use, Buy (HUB) Roundtable Series – Roundtable 1: Government as a User of AI

Carolina Oxenstierna, Alice Cao, and Danny Hague
| March 2025

Georgetown University’s Center for Security and Emerging Technology and Beeck Center for Social Impact and Innovation, together with the Georgetown Law Institute for Technology Law and Policy (Tech Institute), led a series of invite-only roundtables over the course of 2024 to grapple with the legal liability questions that artificial intelligence poses, examine AI’s potential to transform government services, and consider how the government can better attract and use AI talent. The resulting report, written in 2024 after those discussions, is the first installment of a four-part series.

Reports

Shaping the U.S. Space Launch Market

Michael O’Connor and Kathleen Curlee
| February 2025

The United States leads the world in space launch by nearly every measure: number of launches, total mass to orbit, satellite count, and more. SpaceX’s emergence has provided commercial and national security customers with regular, reliable, and relatively affordable launches. However, today’s market consolidation, coupled with the capital required to develop rockets, may make it difficult for new competitors to break in and keep the space launch market dynamic.

Reports

Staying Current with Emerging Technology Trends: Using Big Data to Inform Planning

Emelia Probasco and Christian Schoeberl
| December 2024

This report proposes an approach to systematically identify promising research using big data and analyze that research’s potential impact through structured engagements with subject-matter experts. The methodology offers a structured way to proactively monitor the research landscape and inform strategic R&D priorities.

Reports

AI Safety and Automation Bias

Lauren Kahn, Emelia Probasco, and Ronnie Kinoshita
| November 2024

Automation bias is a critical issue for artificial intelligence deployment. It can cause otherwise knowledgeable users to make crucial and even obvious errors. Organizational, technical, and educational leaders can mitigate these biases through training, design, and processes. This paper explores automation bias and ways to mitigate it through three case studies: Tesla’s autopilot incidents, aviation incidents at Boeing and Airbus, and Army and Navy air defense incidents.

Reports

Building the Tech Coalition

Emelia Probasco
| August 2024

The U.S. Army’s 18th Airborne Corps can now target artillery just as efficiently as the best unit in recent American history, and it can do so with two thousand fewer servicemembers. This report presents a case study of how the 18th Airborne partnered with tech companies to develop, prototype, and operationalize software and artificial intelligence for clear military advantage. The lessons learned inform recommendations to the U.S. Department of Defense as it pushes to further develop and adopt AI and other new technologies.

The U.S. government has an opportunity to seize strategic advantages by working with the remote sensing and data analysis industries. Both grew rapidly over the last decade alongside technology improvements, cheaper space launch, new investment-based business models, and stable regulation. From new sensors to new orbits, the intelligence community and regulators have recognized these changes and opportunities—the U.S. Department of Defense, NASA, and other agencies should follow suit.