Policymakers and the general public increasingly recognize the growing importance of artificial intelligence. Governments, industry, and others are considering the most effective ways to support U.S. AI leadership and limit progress by other countries, particularly competitors. The predominant narrative is that computational power (or “compute”) will serve as the key bottleneck on future progress in AI, and that it therefore offers policymakers a convenient lever to pull.
But how important is compute for AI progress, relative to other factors like talent or data, and how useful is it as a policy lever? A growing body of CSET research—including a recently published survey of more than 400 AI researchers, academic and private-sector alike—suggests that compute isn’t the all-purpose lever that many policymakers wish it were. The growth in computing needs for the largest AI models is slowing down; talent is likely a bigger constraint facing the majority of AI researchers; and many applications of AI must operate in constrained environments that aren’t affected by compute-focused interventions.
On May 25, CSET Research Analyst Micah Musser provided an overview of this body of CSET research and its implications for policymaking—including recent export controls on high-end computing technology and current proposals for a National AI Research Resource. This discussion was followed by an audience Q&A session moderated by Tim Hwang, Senior Technology Fellow at the Institute for Progress. See the recording below to learn more about these latest insights.
Recording and Discussion
Micah Musser is a Research Analyst at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. His latest reports include The Main Resource is the Human, Adversarial Machine Learning and Cybersecurity, and Forecasting Potential Misuses of Language Models for Disinformation Campaigns–and How to Reduce Risk. Previously, he worked as a Research Assistant at the Berkley Center for Religion, Peace, and World Affairs. He graduated from Georgetown University’s College of Arts and Sciences with a B.A. (summa cum laude) in Government, focusing on political theory.
Tim Hwang is a Senior Technology Fellow at the Institute for Progress. His research focuses on metascience, comparative studies in emerging technologies, and grand strategy in science and technology policy. He previously served as a Research Fellow at Georgetown’s Center for Security and Emerging Technology (CSET). He is the former Director of the Harvard-MIT Ethics and Governance of AI Initiative, a philanthropic project working to ensure that machine learning and autonomous technologies are researched, developed, and deployed in the public interest. Before that, he was at Google, where he served as the company’s global public policy lead on artificial intelligence, leading outreach to government and civil society on issues surrounding the technology’s social impact. He holds a J.D. from Berkeley Law School and a B.A. from Harvard College.