Blog

Explore blog posts from CSET experts on the intersection of technology and policy, including in-depth analyses, expert op-eds, and discussions of inclusion and diversity in technology.

The European Union's Artificial Intelligence Act officially came into force today, after more than five years of legislation and negotiation. While its entry into force marks a significant milestone, it also begins a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and offers insights into its timeline and what to expect next.


The NAIRR Pilot: Estimating Compute

Kyle Miller and Rebecca Gelles
| May 8, 2024

The National Artificial Intelligence Research Resource (NAIRR) pilot provides federal infrastructure, including computational resources, to U.S. AI researchers. This blog post estimates the compute provided through the pilot's initial six resources. We find that the total compute capacity of the initial resources is roughly 3.77 exaFLOPS, the equivalent of approximately 5,000 H100 GPUs (using the tensor-core performance optimal for AI workloads). Factoring in the amount of time these resources are available for use, we find that the overall compute allocated is roughly 3.26 yottaFLOPs. The pilot is a significant first step in providing compute to under-resourced organizations, though it remains a fraction of what is available to industry.
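The H100 equivalence is straightforward to sanity-check. Below is a back-of-the-envelope sketch in Python; the per-GPU peak of roughly 756 teraFLOPS (FP16 tensor cores on the PCIe H100) is our assumption, as the post's exact reference figure is not reproduced here.

```python
# Rough check of the NAIRR pilot figures above. The per-GPU peak is an
# assumption (~756 teraFLOPS, FP16 tensor cores on an H100 PCIe); the
# post may use a different reference number.
TOTAL_CAPACITY_FLOPS = 3.77e18  # 3.77 exaFLOPS, a rate in operations/second
H100_PEAK_FLOPS = 756e12        # assumed per-GPU tensor-core peak

h100_equivalents = TOTAL_CAPACITY_FLOPS / H100_PEAK_FLOPS
print(f"~{h100_equivalents:,.0f} H100 equivalents")  # ~4,987, i.e. ~5,000
```

Note the units: exaFLOPS is a rate (operations per second), while yottaFLOPs counts total operations accumulated over the time the resources are available.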

Riding the AI Wave: What’s Happening in K-12 Education?

Ali Crawford and Cherry Wu
| April 2, 2024

Over the past year, artificial intelligence has quickly become a focal point in K-12 education. This blog post describes new and existing K-12 AI education efforts so that U.S. policymakers and other decision-makers may better understand what’s happening in practice.

This blog post assesses how different priorities can change the risk-benefit calculus of open foundation models, and provides divergent answers to the question: given current AI capabilities, what might happen if the U.S. government left the open AI ecosystem unregulated? By answering this question from different perspectives, the post highlights the dangers of hastily committing to any particular course of action without weighing the potentially beneficial, risky, and ambiguous implications of open models.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the third blog post in a three-part series explaining some key elements of how LLMs function. This blog post explains how AI developers are finding ways to use LLMs for much more than just generating text.
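To give a flavor of what "more than just generating text" can look like, here is a minimal, self-contained sketch of tool use, in which the model emits a structured action that surrounding code executes. The call_llm function and the JSON protocol are hypothetical stand-ins, not any particular vendor's API.

```python
# A minimal sketch of LLM "tool use": the model returns a structured action
# instead of prose, and the program executes it. call_llm is a hypothetical
# stand-in for a real model API; the JSON format is illustrative.
import json

def call_llm(prompt: str) -> str:
    # Stand-in: a real system would query a hosted model here.
    return json.dumps({"tool": "calculator", "input": "21 * 2"})

# Demo only: never eval untrusted model output in a real system.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

reply = call_llm("What is 21 * 2? Respond with a JSON tool call.")
action = json.loads(reply)
result = TOOLS[action["tool"]](action["input"])
print(result)  # "42": the model's output drove a computation, not just text
```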

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the second blog post in a three-part series explaining some key elements of how LLMs function. This blog post explores fine-tuning—a set of techniques used to change the types of output that pre-trained models produce.
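To make the idea concrete, here is a minimal sketch of a supervised fine-tuning loop, assuming the Hugging Face transformers library; the "gpt2" model, toy examples, and hyperparameters are illustrative stand-ins, not the setup described in the post.

```python
# A minimal supervised fine-tuning loop, assuming the Hugging Face
# transformers library; "gpt2" and the toy examples below are
# illustrative stand-ins, not the setup described in the post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Toy demonstrations of the desired output style.
examples = [
    "Q: What is the capital of France? A: Paris.",
    "Q: What is 2 + 2? A: 4.",
]
batch = tokenizer(examples, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few gradient steps, just to show the mechanics
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Each gradient step nudges the pre-trained model toward the demonstrated question-and-answer style; real fine-tuning runs use far larger datasets and careful evaluation.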

CSET’s Must Read Research: A Primer

Tessa Baker
| December 18, 2023

This guide provides a run-down of CSET’s research since 2019 for first-time visitors and long-term fans alike. Quickly get up to speed on our “must-read” research and learn about how we organize our work.

There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point for identifying key provisions and monitoring the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes interesting provisions, shares some of our initial reactions, and highlights CSET research that may help the USG tackle the EO.

The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap

Ronnie Kinoshita, Luke Koslosky, and Tessa Baker
| May 3, 2024

On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists the agencies responsible for implementing each EO provision, along with their deadlines.

This blog post by CSET’s Executive Director Dewey Murdick explores two metaphorical lenses for governing the frontier of AI. The "Space Exploration Approach" likens AI models to spacecraft venturing into unexplored territory, requiring detailed planning and regular updates. The "Snake-Filled Garden Approach" views AI as a garden containing both harmless and dangerous 'snakes,' necessitating rigorous testing and risk assessment. Dewey examines these metaphors and the different ways they can inform AI governance strategies that balance innovation with safety, while emphasizing the importance of ongoing learning and adaptability.

What Does AI Red-Teaming Actually Mean?

Jessica Ji
| October 24, 2023

“AI red-teaming” is currently a hot topic, but what does it actually mean? This blog post explains the term’s cybersecurity origins, why AI red-teaming should incorporate cybersecurity practices, and how its evolving definition and sometimes inconsistent usage can be misleading for policymakers interested in exploring testing requirements for AI systems.