Blog

Explore blog posts from CSET experts on the intersection of technology and policy, including in-depth analyses, expert op-eds, and discussions of inclusion and diversity in the technology field.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.


The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap

Ronnie Kinoshita, Luke Koslosky, and Tessa Baker
| May 3, 2024

On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists which agencies are responsible for each provision and the deadlines they face.

What Does AI Red-Teaming Actually Mean?

Jessica Ji
| October 24, 2023

“AI red-teaming” is currently a hot topic, but what does it actually mean? This blog post explains the term’s cybersecurity origins, why AI red-teaming should incorporate cybersecurity practices, and how its evolving definition and sometimes inconsistent usage can be misleading for policymakers interested in exploring testing requirements for AI systems.

A Guide to the Proposed Outbound Investment Regulations

Ngor Luong and Emily S. Weinstein
| October 6, 2023

The August 9 Executive Order aims to restrict certain U.S. investments in key technology areas. In a previous post, we proposed an end-user approach to crafting an AI investment prohibition. In this follow-on post, we rely on existing and hypothetical transactions to test scenarios where U.S. investments in China’s AI ecosystem would or would not be covered under the proposed program, and highlight outstanding challenges.

The EU AI Act: A Primer

Mia Hoffmann
| September 26, 2023

The EU AI Act is nearing formal adoption and implementation. This blog post by CSET Research Fellow and resident EU expert Mia Hoffmann, updated following the December 2023 political agreement, explains what we know about the Act and what it means for AI regulation in the EU (and the world).

Memory Safety: An Explainer

Chris Rohlf
| September 26, 2023

Memory safety issues remain endemic in cybersecurity and are often seen as a never-ending source of cyber vulnerabilities. The topic has recently grown in prominence as the White House Office of the National Cyber Director (ONCD) released a request for comments on how to strengthen the open-source ecosystem. But what exactly is memory safety? This blog post describes the historical antecedents in computing that helped create one aspect of today's insecure cyber ecosystem. There will be no quick fixes, but there is encouraging progress toward addressing these long-standing security issues.
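As a minimal illustration of the kind of spatial memory-safety bug at issue (our own sketch, not an example drawn from the post), the C snippet below shows how an unchecked copy into a fixed-size buffer can write past its bounds, and one conventional way to bound the copy:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char name[8];  /* fixed-size stack buffer: room for 7 characters plus '\0' */
    const char *input = "a string much longer than eight bytes";

    /* Memory-unsafe version: strcpy performs no bounds checking, so copying
       `input` would write past the end of `name` and corrupt adjacent memory,
       a classic buffer overflow:
           strcpy(name, input);   // undefined behavior */

    /* Bounded version: limit the copy to the buffer's capacity and make sure
       the result is null-terminated. */
    strncpy(name, input, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0';

    printf("truncated copy: %s\n", name);
    return 0;
}
```

Memory-safe languages rule out this class of bug by construction, which is part of why they feature so prominently in the policy discussion the post describes.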

Replicator: A Bold New Path for DoD

Michael O’Connor
| September 18, 2023

The Replicator effort by the U.S. Department of Defense (DoD) is intended to overcome some of the military challenges posed by China’s People’s Liberation Army (PLA). This blog post identifies tradeoffs for the Department to consider as it charts the path for Replicator and offers a sense of how ready industry is to support the effort.

Universities can build more inclusive computer science programs by addressing the reasons that students may be deterred from pursuing the field. This blog post explores some of those reasons and the features of CS education that cause them, and provides recommendations on how to design learning experiences that are safer and more exploratory for everyone.

The much-anticipated National Cyber Workforce and Education Strategy (NCWES) lays out a comprehensive set of strategic objectives for training and producing more cyber talent. Rather than prescribing a blanket policy, it prioritizes and encourages the development of more localized cyber ecosystems that serve the needs of a variety of communities. This much-needed, reinvigorated approach recognizes the unavoidable inequities in both cyber education and workforce development and provides strategies for mitigating them. In this blog post, we highlight key elements that could be easily overlooked.

In & Out of China: Financial Support for AI Development

Ngor Luong and Margarita Konaev
| August 10, 2023

Drawing from prior CSET research, this blog post describes different domestic and international initiatives the Chinese government and companies are pursuing to shore up investment in AI and meet China’s strategic objectives, as well as indicators to track their future trajectories.

Large language models (LLMs) could potentially be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis uploaded to arXiv and summarized here suggests that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would almost certainly save a propagandist money on content generation relative to a human-only operation.
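To make the economic intuition concrete, here is a toy back-of-the-envelope calculation in C; every figure in it is an assumption chosen purely for illustration, not a number from the arXiv analysis:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative assumptions only -- none of these figures come from the analysis. */
    const double articles = 1000.0;               /* size of the campaign                  */
    const double human_cost_per_article  = 25.0;  /* assumed cost of a human-written draft */
    const double llm_cost_per_article    = 0.05;  /* assumed LLM inference cost per draft  */
    const double review_cost_per_article = 5.0;   /* assumed human review/editing cost     */

    /* Human-only operation: people draft every article. */
    double human_only = articles * human_cost_per_article;

    /* Human-machine team: the model drafts, a person reviews and edits. */
    double hybrid = articles * (llm_cost_per_article + review_cost_per_article);

    printf("Human-only operation: $%.2f\n", human_only);
    printf("Human-machine team:   $%.2f\n", hybrid);
    printf("Estimated savings:    $%.2f\n", human_only - hybrid);
    return 0;
}
```

Under these assumptions the human-machine team costs roughly a fifth as much as the human-only operation; the full analysis examines how such savings hold up under more realistic cost structures.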