Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The U.S. AI Action Plan is built on three familiar pillars—accelerating innovation, expanding infrastructure, and maintaining technological leadership—but its real test will be education and training. To that end, the Trump Administration has linked the plan to two executive orders issued in April 2025: Executive Order 14277, “Advancing Artificial Intelligence Education for American Youth,” and Executive Order 14278, “Preparing Americans for High-Paying Skilled Trade Jobs of the Future.” Both orders came with tight deadlines, and those windows have now closed. So where do things stand?

China’s Artificial General Intelligence

William Hannas and Huey-Meei Chang
| August 29, 2025

Recent op-eds comparing the United States’ and China’s artificial intelligence (AI) programs fault the former for its focus on artificial general intelligence (AGI) while praising China for its success in applying AI throughout the whole of society. These op-eds overlook an important point: although China is outpacing the United States in diffusing AI across its society, China has by no means de-emphasized its state-sponsored pursuit of AGI.

To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general-purpose AI comply with the AI Act. This blog post reviews the measures set out in the new Code’s safety and security chapter, assesses how they compare to existing practices, and considers what the Code’s global impact might be.

An Analysis of China’s AI Governance Proposals

Hipolito Calero
| September 12, 2024

This blog post analyzes five major Chinese AI governance proposals, focusing on the key actors specified in each proposal. We find that older proposals lack specificity when identifying AI governance actors. Recent proposals, on the other hand, assign roles and responsibilities to a defined set of actors. The findings from this blog post can help policymakers and analysts better understand China’s fast-evolving AI governance landscape.

The European Union's Artificial Intelligence Act has officially come into force today, after more than five years of legislative work and negotiation. While it marks a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and what to expect next.

Riding the AI Wave: What’s Happening in K-12 Education?

Ali Crawford and Cherry Wu
| April 2, 2024

Over the past year, artificial intelligence has quickly become a focal point in K-12 education. This blog post describes new and existing K-12 AI education efforts so that U.S. policymakers and other decision-makers may better understand what’s happening in practice.

The Carnegie Classification of Institutions of Higher Education is making changes to drastically simplify the criteria that determine its highly coveted R1 top-tier research classification. Last year, CSET Senior Fellow Jaret Riddick wrote about a new law from Congress, Section 223 of the 2023 National Defense Authorization Act, intended to leverage existing Carnegie classification criteria to increase defense research capacity for historically Black colleges and universities (HBCUs). Now, research is needed to understand how the changes proposed for the 2025 classification criteria will affect U.S. Department of Defense goals for eligible HBCU partners.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the third blog post in a three-part series explaining some key elements of how LLMs function. This blog post explains how AI developers are finding ways to use LLMs for much more than just generating text.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the second blog post in a three-part series explaining some key elements of how LLMs function. This blog post explores fine-tuning—a set of techniques used to change the types of output that pre-trained models produce.

RISC-V: What it is and Why it Matters

Jacob Feldgoise
| January 22, 2024

As the U.S. government tightens its controls on China’s semiconductor ecosystem, a new dimension is increasingly worrying Congress: the open-source chip architecture known as RISC-V (pronounced “risk-five”). This blog post provides an introduction to the RISC-V architecture and explains what policymakers can do to address concerns about this open architecture.