Publications

CSET produces evidence-driven analysis in a variety of forms, from informative graphics and translations to expert testimony and published reports. Our key areas of inquiry are the foundations of artificial intelligence — such as talent, data, and computational power — as well as how AI can be used in cybersecurity and other national security settings. We also research the policy tools that can be used to shape AI’s development and use, and biotechnology.

Analysis

Which Ties Will Bind?

Sam Bresnick, Ngor Luong, Kathleen Curlee
| February 2024

U.S. technology companies have become important actors in modern conflicts, and several of them have meaningfully contributed to Ukraine’s defense. But many of these companies are deeply entangled with China, potentially complicating their decision-making in a Taiwan contingency.

Analysis

How Persuasive is AI-Generated Propaganda?

Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz
| February 2024

Research participants who read propaganda generated by GPT-3 davinci (a large language model) were nearly as persuaded as those who read real propaganda from Iran or Russia, according to a new peer-reviewed study by Josh A. Goldstein and co-authors.

Formal Response

Comment on NIST RFI Related to the Executive Order Concerning Artificial Intelligence (88 FR 88368)

Mina Narayanan, Jessica Ji, Heather Frase
| February 2, 2024

On February 2, 2024, CSET’s Assessment and CyberAI teams submitted a response to NIST’s Request for Information related to the Executive Order Concerning Artificial Intelligence (88 FR 88368). In the submission, CSET compiles recommendations from six CSET reports and analyses to assist NIST in implementing the AI Executive Order’s requirements.

Formal Response

Comment on Advanced Computing Chips Rule

Jacob Feldgoise, Hanna Dohmen
| January 17, 2024

On January 17, 2024, CSET researchers submitted a response to proposed rules from the Bureau of Industry and Security at the U.S. Department of Commerce. In the submission, CSET recommends, among other things, that Commerce not implement controls on U.S. companies providing Infrastructure-as-a-Service (IaaS) to Chinese entities.

Analysis

The Core of Federal Cyber Talent

Ali Crawford
| January 2024

Strengthening the federal cyber workforce is one of the main priorities of the National Cyber Workforce and Education Strategy. The National Science Foundation’s CyberCorps Scholarship-for-Service program is a direct cyber talent pipeline into the federal workforce. As the program tries to satisfy increasing needs for cyber talent, it is apparent that some form of program expansion is needed. This policy brief summarizes trends from participating institutions to understand how the program might expand and illustrates a potential future artificial intelligence (AI) federal scholarship-for-service program.

Formal Response

Comment on DHS’s Proposed Rule Modernizing H-1B Requirements

Luke Koslosky
| December 2023

CSET submitted the following comment in response to a DHS Notice of Proposed Rulemaking from U.S. Citizenship and Immigration Services about modernizing H-1B requirements, providing flexibility in the F-1 program, and program improvements affecting other nonimmigrant workers.

Analysis

Scaling AI

Andrew Lohn
| December 2023

While recent progress in artificial intelligence (AI) has relied primarily on increasing the size and scale of the models and computing budgets for training, we ask whether those trends will continue. Financial incentives work against continued scaling, and further investment can yield diminishing returns. These effects may already be slowing growth among the very largest models. Future progress in AI may rely more on ideas for shrinking models and inventive use of existing models than on simply increasing investment in compute resources.

Analysis

Controlling Large Language Model Outputs: A Primer

Jessica Ji, Josh A. Goldstein, Andrew Lohn
| December 2023

Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.

Formal Response

CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies regarding the appointment of Chief AI Officers, risk management for AI, and other processes following the October 30, 2023 Executive Order on AI.

Analysis

Repurposing the Wheel: Lessons for AI Standards

Mina Narayanan, Alexandra Seymour, Heather Frase, Karson Elmgren
| November 2023

Standards enable good governance practices by establishing consistent measurement and norms for interoperability, but creating standards for AI is a challenging task. The Center for Security and Emerging Technology and the Center for a New American Security hosted a series of workshops in the fall of 2022 to examine standards development in the areas of finance, worker safety, cybersecurity, sustainable buildings, and medical devices, in order to apply the lessons learned in these domains to AI. This workshop report summarizes our findings and recommendations.