The National Artificial Intelligence Research Resource (NAIRR) pilot provides federal infrastructure, including computational resources, to U.S. AI researchers. This blog post estimates the compute provided through the pilot’s initial six resources. We find that the total compute capacity of the initial resources is roughly 3.77 exaFLOPS, the equivalent of approximately 5,000 H100 GPUs (using the tensor cores optimized for AI workloads). Factoring in the amount of time these resources are available for use, we find that the overall compute allocated is roughly 3.26 yottaFLOPs. The pilot is a significant first step in providing compute to under-resourced organizations, although it remains a fraction of what is available to industry.
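The headline figures above can be sanity-checked with simple arithmetic. Note that the per-GPU throughput below is implied by the blog post's own numbers, not taken from a hardware spec sheet, and the "runtime equivalent" is just the allocation divided by capacity:

```python
# Back-of-the-envelope check of the NAIRR pilot compute figures.
# FLOPS = operations per second (capacity); FLOPs = total operations (allocation).

total_capacity_flops = 3.77e18   # 3.77 exaFLOPS of capacity
h100_equivalents = 5_000         # stated H100-equivalent count
total_allocated_flop = 3.26e24   # 3.26 yottaFLOPs allocated overall

# Per-GPU tensor-core throughput implied by the article's figures, in teraFLOPS
per_gpu_tflops = total_capacity_flops / h100_equivalents / 1e12
print(f"Implied per-H100 throughput: {per_gpu_tflops:.0f} TFLOPS")  # ~754

# How long the full capacity would have to run to deliver the total allocation
seconds = total_allocated_flop / total_capacity_flops
print(f"Full-capacity runtime equivalent: {seconds / 86_400:.1f} days")  # ~10.0
```

The ~10-day figure underscores the post's closing point: the allocation is modest next to what industry labs can run continuously.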
While recent progress in artificial intelligence (AI) has relied primarily on increasing the size of models and the computing budgets used to train them, we ask whether those trends will continue. Financial incentives work against continued scaling, and further investment can bring diminishing returns. These effects may already be slowing growth among the very largest models. Future progress in AI may depend more on ideas for shrinking models and on inventive use of existing models than on simply increasing investment in compute resources.
This explainer overviews techniques for producing smaller and more efficient language models that require fewer resources to develop and operate. Importantly, information on how to leverage these techniques, along with many of the resulting small models, is openly available online for anyone to use. The combination of small (i.e., easy to use) and open (i.e., easy to access) models could have significant implications for artificial intelligence development.
Artificial intelligence that makes news headlines, such as ChatGPT, typically runs in well-maintained data centers with an abundant supply of compute and power. However, these resources are far more limited on many real-world systems, such as drones, satellites, and ground vehicles. As a result, the AI that can run onboard these devices will often be inferior to state-of-the-art models, which can affect their usability and create the need for additional safeguards in high-risk contexts. This issue brief contextualizes these challenges and provides policymakers with recommendations on how to engage with these technologies.
In an op-ed published in The Diplomat, Micah Musser discusses the concerns raised by policymakers in Washington about the disruptive potential of artificial intelligence technologies.
CSET Research Analyst Micah Musser and Institute for Progress Fellow Tim Hwang discussed CSET research examining factors that will contribute to future AI development.
A CSET data brief by Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken, and Andrew Lohn was cited by Politico in a newsletter about the rapidly changing field of artificial intelligence and the debate surrounding its regulation.
Micah Musser, Rebecca Gelles, Ronnie Kinoshita, Catherine Aiken, and Andrew Lohn
| April 2023
Progress in artificial intelligence (AI) depends on talented researchers, well-designed algorithms, quality datasets, and powerful hardware. The relative importance of these factors is often debated, with many recent “notable” models requiring massive expenditures of advanced hardware. But how important is computational power for AI progress in general? This data brief explores the results of a survey of more than 400 AI researchers to evaluate the importance and distribution of computational needs.
Between 2012 and 2018, the amount of computing power used by record-breaking artificial intelligence models doubled every 3.4 months. Even with money pouring into the AI field, this trendline is unsustainable. Because of cost, hardware availability, and engineering difficulties, the next decade of AI can't rely exclusively on applying more and more computing power to drive further progress.
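A quick illustration of why a 3.4-month doubling period is so aggressive. The six-year figure assumes the pace held for the entire 2012–2018 window, which is a simplification of the underlying trend data:

```python
# What "doubling every 3.4 months" implies as a compound growth rate.
doubling_period_months = 3.4

# Growth factor per year: 2^(12 / 3.4)
annual_factor = 2 ** (12 / doubling_period_months)
print(f"~{annual_factor:.1f}x more compute per year")  # ~11.6x

# Compounded over the full 72-month window: millions-fold growth
six_year_factor = 2 ** (72 / doubling_period_months)
print(f"~{six_year_factor:,.0f}x over six years")
```

Sustaining an ~11.6x annual increase means costs that grow by roughly the same factor, which no budget can absorb indefinitely.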
CSET submitted this comment to the Office of Science and Technology Policy and the National Science Foundation to support the work of the National Artificial Intelligence Research Resource (NAIRR) Task Force to develop an implementation roadmap that would provide AI researchers and students across scientific disciplines access to computational resources, high-quality data, educational tools, and user support.