Tag Archive: Compute

Scaling AI

Andrew Lohn
| December 2023

Recent progress in artificial intelligence (AI) has relied primarily on increasing the size of models and the computing budgets used to train them, but will those trends continue? Financial incentives work against further scaling, and additional investment can yield diminishing returns. These effects may already be slowing growth among the very largest models. Future progress in AI may depend more on ideas for shrinking models and on inventive use of existing models than on simply increasing investment in compute.
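To see why returns can diminish, consider a minimal sketch. It assumes, purely for illustration, that model error falls as a power law in training compute; the exponent below is hypothetical and not taken from the report.

    # Illustrative assumption: error ~ C^(-alpha) for training compute C.
    # alpha is hypothetical, chosen only to show the shape of the curve.
    alpha = 0.05

    # Compute multiplier k needed to halve error under this assumption:
    # (k * C)^(-alpha) = 0.5 * C^(-alpha)  =>  k = 0.5 ** (-1 / alpha)
    k = 0.5 ** (-1 / alpha)
    print(f"Compute multiplier to halve error: {k:,.0f}x")  # ~1,048,576x

Under such a relationship, each further halving of error demands a roughly million-fold increase in compute, which is the sense in which pure scaling stops paying for itself.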

For Export Controls on AI, Don’t Forget the “Catch-All” Basics

Emily S. Weinstein and Kevin Wolf
| July 5, 2023

Existing U.S. government tools and approaches may help mitigate some of the issues worrying AI observers. This blog post describes long-standing “catch-all” controls, administered by the Department of Commerce’s Bureau of Industry and Security (BIS), and how they might be used to address some of these threats.

In an op-ed published in The Diplomat, Micah Musser discusses the concerns raised by policymakers in Washington about the disruptive potential of artificial intelligence technologies.

“The Main Resource is the Human”

Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken, and Andrew Lohn
| April 2023

Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent “notable” models requiring massive expenditures on advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of researchers’ computational needs.

Compute Accounting Principles Can Help Reduce AI Risks

Tech Policy Press
| November 30, 2022

In an opinion piece for Tech Policy Press, CSET's Krystal Jackson, Karson Elmgren, Jacob Feldgoise, and their coauthor Andrew Critch wrote about computational power as a key factor driving AI progress.

AI and Compute

Andrew Lohn and Micah Musser
| January 2022

Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trend is unsustainable. Because of cost, hardware availability, and engineering difficulties, the next decade of AI can't rely exclusively on applying more and more computing power to drive further progress.
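A quick sketch of the arithmetic shows why. The 3.4-month doubling period comes from the summary above; the growth factors below simply follow from it.

    # Growth implied by a compute doubling time of 3.4 months.
    doubling_months = 3.4

    per_year = 2 ** (12 / doubling_months)     # ~11.5x each year
    per_decade = 2 ** (120 / doubling_months)  # ~4.2e10 over ten years

    print(f"Growth per year:   ~{per_year:.1f}x")
    print(f"Growth per decade: ~{per_decade:.1e}x")

Sustaining that pace would mean a roughly forty-billion-fold increase in compute over a decade, far beyond what budgets or hardware supply could plausibly support.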

Devices based on superconductor electronics can achieve much higher energy efficiency than standard electronics. Research in superconductor electronics could advance a range of commercial and defense priorities, with potential applications for supercomputing, artificial intelligence, sensors, signal processing, and quantum computing. This brief identifies the countries most actively contributing to superconductor electronics research and assesses their relative competitiveness in terms of both research output and funding.

National Power After AI

Matthew Daniels and Ben Chang
| July 2021

AI technologies will likely alter great-power competition in foundational ways, changing both how nations create power and their motives for wielding it against one another. This paper is a first step toward thinking more expansively about AI and national power, and toward seeking pragmatic insights for long-term U.S. competition with authoritarian governments.

One sentence summarizes the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. This AI triad of computing power, algorithms, and data offers a framework for decision-making in national security policy.

Plus, New Zealand’s Algorithm Charter, the House Intel Authorization Act, and ODNI’s AI principles.