Blog

Explore blog posts from CSET experts on the nexus of technology and policy, from in-depth analyses and expert op-eds to discussions of inclusion and diversity in the technology field.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.

Recent announcements from both Pentagon and congressional leaders offer a significant opportunity to deliver autonomous systems technology at scale for U.S. warfighters well into the future. Dr. Jaret Riddick, CSET Senior Fellow and former Principal Director for Autonomy in USD(R&E), offers his perspective on DOD's Replicator Initiative and recent legislative proposals on DOD autonomy.

Universities can build more inclusive computer science programs by addressing the reasons that students may be deterred from pursuing the field. This blog post explores some of those reasons and the features of CS education that cause them, and provides recommendations on how to design learning experiences that are safer and more exploratory for everyone.

Scoping AI for National Security: An Impossible Task?

Emily S. Weinstein and Ngor Luong
| August 28, 2023

On August 9, 2023, the Biden administration announced an executive order restricting certain U.S. investments in China's key technology sectors, including artificial intelligence. This blog post proposes implementing the order's AI-related national security investment restrictions through a list-based end-user approach that builds on existing list-based tools.

A Summer Packed with AI Activity

Tessa Baker
| August 25, 2023

The summer of 2023 will likely go down in the history books as the "Summer of AI," with numerous advancements in LLMs and other generative AI systems capturing the public consciousness. At the same time, the Biden administration announced multiple efforts to protect the American public's safety, security, and privacy while ensuring the U.S. continues to lead the world in AI innovation. This blog post summarizes some of these major executive branch actions and highlights related CSET insights.

Understanding AI Harms: An Overview

Heather Frase and Owen Daniels
| August 11, 2023

As policymakers decide how best to regulate AI, they first need to grasp the different types of harm that various AI applications might cause at the individual, national, and even societal levels. To better understand AI harm, this blog post presents its key components and characteristics.

The much-anticipated National Cyber Workforce and Education Strategy (NCWES) lays out a comprehensive set of strategic objectives for training and producing more cyber talent. Rather than prescribing a blanket policy, it prioritizes and encourages the development of localized cyber ecosystems that serve the needs of a variety of communities. This much-needed approach acknowledges the unavoidable inequities in both cyber education and workforce development and provides strategies for mitigating them. In this blog post, we highlight key elements of the strategy that could easily be overlooked.

In & Out of China: Financial Support for AI Development

Ngor Luong and Margarita Konaev
| August 10, 2023

Drawing on prior CSET research, this blog post describes the domestic and international initiatives that the Chinese government and Chinese companies are pursuing to shore up investment in AI and meet China's strategic objectives, as well as indicators for tracking their future trajectories.

Large language models (LLMs) could be used by malicious actors to generate disinformation at scale. But how likely is this risk, and what economic incentives do propagandists actually face to turn to LLMs? New analysis uploaded to arXiv and summarized here suggests that a well-run human-machine team using existing LLMs (even open-source models that are not cutting edge) would almost certainly save a propagandist money on content generation relative to a human-only operation.
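
To make that cost comparison concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it is a hypothetical placeholder chosen for illustration, not a number from the arXiv analysis:

```python
# Hypothetical cost comparison: human-only vs. human-machine content
# generation. All figures are illustrative assumptions, not from the paper.

HUMAN_COST_PER_POST = 2.00   # assumed cost of one human-written post (USD)
POSTS_NEEDED = 100_000       # assumed scale of the influence operation

# Human-machine team: an LLM drafts every post; a human reviews a fraction.
API_COST_PER_POST = 0.01     # assumed inference cost per generated post (USD)
REVIEW_FRACTION = 0.25       # assumed share of outputs needing human review
REVIEW_COST_PER_POST = 0.50  # assumed cost of one quick human review (USD)

human_only = HUMAN_COST_PER_POST * POSTS_NEEDED
human_machine = (API_COST_PER_POST
                 + REVIEW_FRACTION * REVIEW_COST_PER_POST) * POSTS_NEEDED

print(f"Human-only:    ${human_only:>12,.2f}")
print(f"Human-machine: ${human_machine:>12,.2f}")
print(f"Savings:       ${human_only - human_machine:>12,.2f}")
```

Under these assumptions, the machine-assisted pipeline replaces per-post writing costs with cheap inference plus occasional review; the exact numbers will vary with model and labor costs, but that is the general shape of the savings the analysis describes.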

Why Improving AI Reliability Metrics May Not Lead to Reliability

Romeo Valentin and Helen Toner
| August 8, 2023

How can we measure the reliability of machine learning systems? And do these measures really help us predict real-world performance? A recent study by the Stanford Intelligent Systems Laboratory, supported by CSET funding, provides new evidence that models may perform well on certain reliability metrics while remaining unreliable in other ways. This blog post summarizes the study's results, which suggest that policymakers and regulators should not think of "reliability" or "robustness" as a single, easy-to-measure property of an AI system. Instead, AI reliability requirements will need to consider which facets of reliability matter most for any given use case and how those facets can be evaluated.
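
As a stylized illustration of this point, the sketch below uses synthetic numbers (not data from the Stanford study) to show a classifier that looks dependable by accuracy yet fails a calibration check, one common facet of reliability:

```python
import numpy as np

# Synthetic illustration: a model can score well on one reliability metric
# (accuracy) while failing another (calibration). All numbers are made up.
rng = np.random.default_rng(0)

labels = rng.integers(0, 2, size=1000)                        # ground truth
preds = np.where(rng.random(1000) < 0.9, labels, 1 - labels)  # ~90% accurate
confidences = np.full(1000, 0.99)                             # always 99% sure

accuracy = (preds == labels).mean()
# Expected calibration error, computed here with a single confidence bin:
# the gap between average stated confidence and actual accuracy.
ece = abs(confidences.mean() - accuracy)

print(f"Accuracy: {accuracy:.2f}")  # looks reliable (about 0.90)
print(f"ECE:      {ece:.2f}")       # but systematically overconfident
```

A requirement that checked only accuracy would pass this model, while one that checked calibration would not, which is why no single metric can stand in for "reliability."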

Large Language Models (LLMs): An Explainer

James Dunham
| August 1, 2023

CSET has received a lot of questions about LLMs and their implications. But these questions and discussions tend to miss some basics about what LLMs are and how they work. In this blog post, we ask CSET's NLP Engineer, James Dunham, to help us explain LLMs in plain English.
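
As a deliberately toy preview of the core mechanic such explainers cover, next-token prediction, here is a bigram-counting sketch; real LLMs use neural networks trained on vast corpora rather than word counts, and nothing below comes from the post itself:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: counts which word follows which in a tiny corpus.
# LLMs perform a vastly scaled-up, learned version of this same task.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often follows `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (it follows "the" most often here)
```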