Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI's development and use, as well as biotechnology.

Analysis

Building the Tech Coalition

Emelia Probasco
| August 2024

The U.S. Army’s 18th Airborne Corps can now target artillery just as efficiently as the best unit in recent American history—and it can do so with two thousand fewer servicemembers. This report presents a case study of how the 18th Airborne partnered with tech companies to develop, prototype, and operationalize software and artificial intelligence for clear military advantage. The lessons learned inform recommendations to the U.S. Department of Defense as it pushes to further develop and adopt AI and other new technologies.


Extreme ultraviolet (EUV) lithography is the most important technology to have emerged from the semiconductor industry in recent years. This report presents a case study of its development from the 1980s to the present. Using bibliometric data, it details the evolution of the research community responsible for EUV lithography and the many scientific breakthroughs made along its decades-long path to commercialization. The paper concludes with lessons learned for policymakers interested in protecting and promoting the next generation of emerging technologies.

Analysis

Governing AI with Existing Authorities

Jack Corrigan, Owen Daniels, Lauren Kahn, and Danny Hague
| July 2024

A core question in policy debates around artificial intelligence is whether federal agencies can use their existing authorities to govern AI or if the government needs new legal powers to manage the technology. The authors argue that relying on existing authorities is the most effective approach to promoting the safe development and deployment of AI systems, at least in the near term. This report outlines a process for identifying existing legal authorities that could apply to AI and highlights areas where additional legislative or regulatory action may be needed.

Formal Response

Comment on Commerce Department RFI 89 FR 27411

Catherine Aiken, James Dunham, Jacob Feldgoise, Rebecca Gelles, Ronnie Kinoshita, Mina Narayanan, and Christian Schoeberl
| July 16, 2024

CSET submitted the following comment in response to a Request for Information (RFI) from the Department of Commerce regarding 89 FR 27411.

The U.S. government has an opportunity to seize strategic advantages by working with the remote sensing and data analysis industries. Both grew rapidly over the last decade alongside technology improvements, cheaper space launch, new investment-based business models, and stable regulation. From new sensors to new orbits, the intelligence community and regulators have recognized these changes and opportunities—the U.S. Department of Defense, NASA, and other agencies should follow suit.

Analysis

Enabling Principles for AI Governance

Owen Daniels and Dewey Murdick
| July 2024

How to govern artificial intelligence is a concern that is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.

Data Brief

A Quantitative Assessment of Department of Defense S&T Publication Collaborations

Emelia Probasco and Autumn Toney
| June 2024

While the effects of the U.S. Department of Defense’s broad investments in research and development go far beyond what is publicly disclosed, authors affiliated with the DOD do publish papers about their research. This analysis examines more than 100,000 papers by DOD-affiliated authors since 2000 and offers insight into the patterns of research publication and collaboration by the DOD.

Analysis

China’s Military AI Roadblocks

Sam Bresnick
| June 2024

China’s leadership believes that artificial intelligence will play a central role in future wars. However, the author's comprehensive review of dozens of Chinese-language journal articles about AI and warfare reveals that Chinese defense experts see Beijing as facing several technological challenges that may hinder its ability to capitalize on the advantages of military AI. This report outlines these perceived barriers and identifies several technologies that Chinese experts believe may help the country develop and deploy military AI-enabled systems.

Analysis

Trust Issues: Discrepancies in Trustworthy AI Keywords Use in Policy and Research

Emelia Probasco, Kathleen Curlee, and Autumn Toney
| June 2024

Policy and research communities strive to mitigate AI harm while maximizing its benefits. Achieving effective and trustworthy AI requires a shared language. This analysis of policies across different countries and of the research literature identifies consensus on six critical concepts: accountability, explainability, fairness, privacy, security, and transparency.

Analysis

Key Concepts in AI Safety: Reliable Uncertainty Quantification in Machine Learning

Tim G. J. Rudner and Helen Toner
| June 2024

This paper is the fifth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. This paper explores the opportunities and challenges of building AI systems that “know what they don’t know.”