Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The European Union's Artificial Intelligence Act officially came into force today, after more than five years of legislative process and negotiation. While this marks a significant milestone, it also begins a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as its rules for general-purpose AI and its governance structures, and offers insights into its timeline and what to expect next.


Scoping AI for National Security: An Impossible Task?

Emily S. Weinstein and Ngor Luong
| August 28, 2023

On August 9, 2023, the Biden administration announced an executive order to restrict certain U.S. investments in China’s key technology sectors, including artificial intelligence. This blog post proposes implementing AI-related investment restrictions for national security through a list-based end-user approach that builds on existing list-based tools.

The much-anticipated National Cyber Workforce and Education Strategy (NCWES) provides a comprehensive set of strategic objectives for training and producing more cyber talent. Rather than prescribing a blanket policy, it prioritizes and encourages the development of localized cyber ecosystems that serve the needs of a variety of communities. This much-needed, reinvigorated approach acknowledges the unavoidable inequities in both cyber education and workforce development while offering strategies for mitigating them. In this blog post, we highlight key elements that could be easily overlooked.

In & Out of China: Financial Support for AI Development

Ngor Luong and Margarita Konaev
| August 10, 2023

Drawing from prior CSET research, this blog post describes different domestic and international initiatives the Chinese government and companies are pursuing to shore up investment in AI and meet China’s strategic objectives, as well as indicators to track their future trajectories.

Large language models (LLMs) could potentially be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis, uploaded to arXiv and summarized here, suggests that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would almost certainly save a propagandist money on content generation relative to a human-only operation.

On July 21, the White House announced voluntary commitments from seven AI firms to ensure safe, secure, and transparent AI. CSET’s research provides important context to this discussion.

Securing AI Makes for Safer AI

John Bansemer and Andrew Lohn
| July 6, 2023

Recent discussions of AI have focused on safety, reliability, and other risks. Lost in this debate is the real need to secure AI against malicious actors. This blog post applies lessons from traditional cybersecurity to emerging AI-model risks.

For Export Controls on AI, Don’t Forget the “Catch-All” Basics

Emily S. Weinstein and Kevin Wolf
| July 5, 2023

Existing U.S. government tools and approaches may help mitigate some of the issues worrying AI observers. This blog post describes long-standing “catch-all” controls, administered by the Department of Commerce’s Bureau of Industry and Security (BIS), and how they might be used to address some of these threats.

Controlling Access to Compute via the Cloud: Options for U.S. Policymakers, Part II

Hanna Dohmen, Jacob Feldgoise, Emily S. Weinstein, and Timothy Fist
| June 5, 2023

In the second of a series of publications, CSET and CNAS outline one avenue the U.S. government could pursue to cut off China’s access to cloud computing services that support military, security, or intelligence end uses and end users. The authors discuss the pros, cons, and limitations of this approach.

Controlling Access to Advanced Compute via the Cloud: Options for U.S. Policymakers, Part I

Hanna Dohmen, Jacob Feldgoise, Emily S. Weinstein, and Timothy Fist
| May 15, 2023

In the first of a series of publications, CSET and CNAS outline one potential avenue for the U.S. government to cut off Chinese access to controlled chips via cloud computing, as well as its pros, cons, and limitations.

Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk

Josh A. Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, and Katerina Sedova
| January 2023

Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near-human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and they provide a framework for assessing potential mitigation strategies.