Blog

Delve into insightful blog posts from CSET experts exploring the nexus of technology and policy. Navigate through in-depth analyses, expert op-eds, and thought-provoking discussions on inclusion and diversity within the realm of technology.

The European Union's Artificial Intelligence Act has officially come into force today after more than five years of legislative processes and negotiations. While marking a significant milestone, it also initiates a prolonged phase of implementation, refinement, and enforcement. This blog post outlines key aspects of the regulation, such as rules for general-purpose AI and governance structures, and provides insights into its timeline and future expectations.

The NAIRR Pilot: Estimating Compute

Kyle Miller and Rebecca Gelles | May 8, 2024

The National Artificial Intelligence Research Resource (NAIRR) pilot provides federal infrastructure, including computational resources, to U.S. AI researchers. This blog post estimates the compute provided through the pilot's initial six resources. We find that the total compute capacity of the initial resources is roughly 3.77 exaFLOPS, the equivalent of approximately 5,000 H100 GPUs (using the tensor cores optimized for AI workloads). Factoring in the amount of time these resources are available for use, we find that the overall compute allocated is roughly 3.26 yottaFLOPs. The pilot is a significant first step in providing compute to under-resourced organizations, though it remains a fraction of what is available to industry.
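The headline conversion above, from aggregate capacity to H100-equivalents, can be sanity-checked with simple arithmetic. A minimal sketch, assuming a per-GPU dense tensor-core throughput of roughly 756 teraFLOPS (an assumption; the post does not specify which H100 variant or numeric precision underlies its figure):

```python
# Sanity check: aggregate compute capacity expressed in H100-equivalents.
# The per-GPU throughput below is an assumed figure, not from the post.

TOTAL_CAPACITY_FLOPS = 3.77e18   # ~3.77 exaFLOPS across the six initial resources
H100_TENSOR_FLOPS = 756e12       # assumed H100 tensor-core throughput (FLOPS)

h100_equivalents = TOTAL_CAPACITY_FLOPS / H100_TENSOR_FLOPS
print(f"{h100_equivalents:,.0f} H100-equivalents")  # on the order of 5,000
```

Under that assumed throughput, the result lands close to the post's "approximately 5,000 H100 GPUs"; a different variant or precision would shift the count somewhat.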

A recent topic of contention among artificial intelligence researchers has been whether large language models can exhibit unpredictable ("emergent") jumps in capability as they are scaled up. These arguments have found their way into policy circles and the popular press, often in simplified or distorted ways that have created confusion. This blog post explores the disagreements around emergence and their practical relevance for policy.

Riding the AI Wave: What’s Happening in K-12 Education?

Ali Crawford and Cherry Wu | April 2, 2024

Over the past year, artificial intelligence has quickly become a focal point in K-12 education. This blog post describes new and existing K-12 AI education efforts so that U.S. policymakers and other decision-makers may better understand what’s happening in practice.

The Carnegie Classification of Institutions of Higher Education is making changes to drastically simplify the criteria that determine its highly coveted R1 top-tier research classification. Last year, CSET Senior Fellow Jaret Riddick wrote about a new law from Congress, Section 223 of the 2023 National Defense Authorization Act, intended to leverage existing Carnegie classification criteria to increase defense research capacity at historically Black colleges and universities. Now, research is needed to understand how the changes proposed for the 2025 classification criteria will affect U.S. Department of Defense goals for eligible HBCU partners.

This blog post assesses how different priorities can change the risk-benefit calculus of open foundation models, and provides divergent answers to the question of “given current AI capabilities, what might happen if the U.S. government left the open AI ecosystem unregulated?” By answering this question from different perspectives, this blog post highlights the dangers of hastily subscribing to any particular course of action without weighing the potentially beneficial, risky, and ambiguous implications of open models.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the third blog post in a three-part series explaining some key elements of how LLMs function. This blog post explains how AI developers are finding ways to use LLMs for much more than just generating text.
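As a rough illustration of the idea that LLMs can be used for more than generating text, here is a minimal tool-calling loop. The model is a hard-coded stub and the `TOOL:name(args)` reply format is invented for illustration; real providers each define their own function-calling conventions.

```python
# Toy tool-use loop: the "model" emits a structured tool request,
# the wrapper dispatches it and returns the tool's result.
# `fake_model` and the TOOL:... format are illustrative stand-ins.
import re

def fake_model(prompt):
    # Stub standing in for an LLM that decides to use a calculator.
    return "TOOL:calc(17*23)"

# Registry of callable tools; eval is sandboxed to arithmetic-only input here.
TOOLS = {"calc": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def run_with_tools(prompt):
    reply = fake_model(prompt)
    match = re.fullmatch(r"TOOL:(\w+)\((.*)\)", reply)
    if match:
        name, arg = match.groups()
        return TOOLS[name](arg)  # hand the tool's output back as the answer
    return reply                 # plain text reply, no tool needed

print(run_with_tools("What is 17 * 23?"))  # '391'
```

The pattern generalizes: the same loop can route to search, code execution, or database queries, which is the core of the "agents" the post describes.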

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the second blog post in a three-part series explaining some key elements of how LLMs function. This blog post explores fine-tuning—a set of techniques used to change the types of output that pre-trained models produce.

Large language models (LLMs), the technology that powers generative artificial intelligence (AI) products like ChatGPT or Google Gemini, are often thought of as chatbots that predict the next word. But that isn't the full story of what LLMs are and how they work. This is the first blog post in a three-part series explaining some key elements of how LLMs function. This blog post covers pre-training—the process by which LLMs learn to predict the next word—and why it’s so surprisingly powerful.
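As a loose intuition for "learning to predict the next word," here is a toy bigram counter. It is not how LLMs are implemented (they use neural networks trained on vast corpora), but it uses the same training signal: observing which word tends to follow which.

```python
# Toy next-word predictor: count, for each word in a tiny corpus,
# which word follows it and how often, then predict the most frequent.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # 'on' -- it always followed "sat" in the corpus
```

Pre-training an LLM scales this idea up enormously: instead of raw counts over word pairs, the model learns statistical patterns over long contexts, which is what makes next-word prediction so surprisingly powerful.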

The October 30, 2023, White House executive order on artificial intelligence requires companies developing the most advanced AI models to report safety testing results to the federal government. CSET Horizon Junior Fellow Thomas Woodside writes that these requirements are a good first step toward managing uncertain risks and that Congress should consider codifying them into law.

China’s Hybrid Economy: What to Do about BGI?

Anna Puglisi | February 2, 2024

As the U.S. government considers banning genomics companies from China in the Biosecure Act, it opens a broader question of how the U.S. and other market economies should deal with China’s national champions. This blog post provides an overview of BGI and how China’s industrial policy impacts technology development.