Emergent Abilities in Large Language Models: An Explainer

Thomas Woodside

April 16, 2024

A recent topic of contention among artificial intelligence researchers has been whether large language models can exhibit unpredictable ("emergent") jumps in capability as they are scaled up. These arguments have found their way into policy circles and the popular press, often in simplified or distorted ways that have created confusion. This blog post explores the disagreements around emergence and their practical relevance for policy.
