Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general-purpose AI comply with the AI Act. This blog reviews the measures set out in the new Code’s safety and security chapter, assesses how they compare to existing practices, and considers what the Code’s global impact might be.

An Analysis of China’s AI Governance Proposals

Hipolito Calero
| September 12, 2024

This blog post analyzes five major Chinese AI governance proposals, focusing on the key actors specified in each proposal. We find that older proposals lack specificity when identifying AI governance actors. Recent proposals, on the other hand, assign roles and responsibilities to a defined set of actors. The findings from this blog post can help policymakers and analysts better understand China’s fast-evolving AI governance landscape.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the third blog post in a three-part series explaining some key elements of how LLMs function. This blog post explains how AI developers are finding ways to use LLMs for much more than just generating text.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the second blog post in a three-part series explaining some key elements of how LLMs function. This blog post explores fine-tuning—a set of techniques used to change the types of output that pre-trained models produce.

RISC-V: What it is and Why it Matters

Jacob Feldgoise
| January 22, 2024

As the U.S. government tightens its controls on China’s semiconductor ecosystem, a new dimension is increasingly worrying Congress: the open-source chip architecture known as RISC-V (pronounced “risk-five”). This blog post provides an introduction to the RISC-V architecture and an explanation of what policymakers can do to address concerns about this open architecture.

CSET’s Must Read Research: A Primer

Tessa Baker
| December 18, 2023

This guide provides a rundown of CSET’s research since 2019 for first-time visitors and long-time fans alike. Quickly get up to speed on our “must-read” research and learn how we organize our work.

This blog post by CSET’s Executive Director Dewey Murdick explores two different metaphorical lenses for governing the frontier of AI. The "Space Exploration Approach" likens AI models to spacecraft venturing into unexplored territories, requiring detailed planning and regular updates. The "Snake-Filled Garden Approach" views AI as a garden with both harmless and dangerous 'snakes,' necessitating rigorous testing and risk assessment. In the post, Dewey examines these metaphors and the different ways they can inform approaches to AI governance that balance innovation with safety, all while emphasizing the importance of ongoing learning and adaptability.

A Guide to the Proposed Outbound Investment Regulations

Ngor Luong and Emily S. Weinstein
| October 6, 2023

The August 9 Executive Order aims to restrict certain U.S. investments in key technology areas. In a previous post, we proposed an end-user approach to crafting an AI investment prohibition. In this follow-on post, we rely on existing and hypothetical transactions to test scenarios where U.S. investments in China’s AI ecosystem would or would not be covered under the proposed program, and highlight outstanding challenges.

The EU AI Act: A Primer

Mia Hoffmann
| September 26, 2023

The EU AI Act is nearing formal adoption and implementation. Read this blog post, with updated analysis following the December 2023 political agreement, by CSET’s resident EU expert and Research Fellow, Mia Hoffmann. Learn what we know about the Act and what it means for AI regulation in the EU (and the world).