The question of how to regulate so-called “frontier AI” systems has taken center stage in recent months, driven by a flurry of releases of powerful models with wide-ranging capabilities and growing public concern about the ways systems using these models could cause harm. Researchers and engineers working on this technology (including Turing Award winners Geoffrey Hinton and Yoshua Bengio) have raised concerns that systems not far beyond today’s frontier could be misused to enable cyber, chemical, or biological attacks, or could even autonomously replicate and cause harm. While there is a great deal of expert disagreement on when—or even whether—these risks will come to pass, it would seem unwise for governments to dismiss concerns of this kind.
Proposals for how to regulate frontier AI have started to emerge, including a multi-authored white paper, policy agendas from industry players such as Google and Microsoft, and legislative proposals from lawmakers and civil society organizations alike. While each of these proposals has been a valuable contribution to the public discourse, none of them is yet a “shovel-ready” regulatory proposal, and some of the suggested policies raise more questions than they answer. Designing effective regulations to manage risks posed by AI’s advancing frontier will require deep technical and legal expertise as well as close attention to second-order effects, including the risk of regulatory capture by powerful incumbent companies.
In a series of workshops held over the summer, CSET and the Center for a New American Security (CNAS) sought to tackle some key questions critical for designing such regulations. Our participants were drawn from academia, civil society, and industry, and covered a range of perspectives, including voices both supportive and skeptical of frontier AI regulation. We also enlisted staff from Microsoft and OpenAI as technical contributors.
We sought to answer two questions:
- What would an appropriate regulatory definition for “frontier AI” look like?
- What regulatory requirements could be placed on frontier AI developers, both in the near term and longer term if AI capabilities continue to grow?
This blog post summarizes some of the key takeaways that we (authors Helen Toner and Timothy Fist) drew from these workshops. It is not intended to serve as a full readout of the conversations or to represent the views of other participants.
Key takeaways:
- Landing on a satisfactory definition for frontier AI is challenging.
- Compute thresholds could play a useful role in regulatory definitions, but are unlikely to work as a complete solution.
- Regulatory interventions to improve the U.S. government’s visibility into the AI frontier are a promising and shovel-ready place to start.
- Requirements around security, assessments, and risk management could be valuable, but are not yet as ripe for implementation.
- Regulations on supporting computing infrastructure could be a useful complement to frontier AI regulation, but should be carefully scoped to avoid bad incentives.
- Frontier AI regulation would tackle only a small fraction of the policy challenges posed by AI, and should not be seen as a complete solution.
Landing on a satisfactory definition for frontier AI is challenging.
Any plan to regulate frontier AI systems would require a clear definition of which specific technologies it would apply to. The large, cutting-edge systems usually referred to as frontier AI represent only a small fraction of the huge variety of AI systems in use, so these regulations would need to target that specific slice. But there are both conceptual and practical difficulties in doing so, as follows.
Firstly, it’s important to distinguish between two different categories of systems that might be meant by frontier AI:
1. Models that are at or beyond the current cutting edge, which “expand the frontier” or “push into the unknown,” in that they are pushing the limits of what AI can do.
2. Any model that is considered to be above a certain threshold of riskiness.
Definition (1), which would move over time as the field advances, is a more intuitive use of the word “frontier.” Using this definition, one might distinguish between a “thin” frontier—i.e., models with literally unprecedented capabilities or scale—and a “thicker” frontier that would move over time but still include some systems behind the absolute bleeding edge.
Definition (2) does not match the commonsense usage of “frontier,” since over time it would come to include models that are far behind the state of the art. But the underlying motivation to regulate frontier AI in the first place is the possibility that such systems could pose severe risks, so it is appealing to try to structure the definition around the level of risk posed. The main existing paper proposing frontier AI regulation, Anderljung et al. 2023, uses this style of definition, suggesting that frontier models are “highly capable foundation models for which there is good reason to believe could possess dangerous capabilities sufficient to pose severe risks to public safety.”
Another open question is how to handle narrow models (e.g., the protein-folding model AlphaFold), which in an advanced form could pose serious risks—but which are very different from the large, general-purpose models that are typically top of mind in conversations about frontier AI.
Ultimately, governments likely need to be able to manage risks from all of the above, though it may not be possible to capture all of these different issues within a single regulatory structure. We believe that the most feasible approach for frontier AI regulation is likely definition (1) above. This would mean focusing on the high degree of uncertainty that characterizes the trajectory of increasingly advanced general-purpose models, and structuring regulatory requirements to improve governments’ visibility into and understanding of how these models are progressing. We elaborate more on what this could look like below.
Compute thresholds could play a useful role in regulatory definitions, but are unlikely to work as a complete solution.
If the paradigm of AI development of recent years continues, novel capabilities are likely to first emerge in models that require enormous amounts of computation for development (often referred to as “training compute”). This is because, for a given type of AI model, the amount of training compute used is strongly correlated with its resultant capabilities. Compute usage is relatively straightforward to define, measure, and verify, making it an attractive way to target regulation. What’s more, today’s most compute-intensive models cost tens or hundreds of millions of dollars to train, so targeting regulation at this tier would automatically exempt smaller developers, which is appealing from a competition perspective. Regulatory definitions based on training compute also benefit from a direct technical relationship to the underlying computer hardware, raising the possibility of aligning them with regulation of the supporting computing infrastructure, which we touch on further below.
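To illustrate why compute lends itself to bright-line rules, the sketch below shows how a developer or regulator might estimate a model’s training compute and compare it against a threshold. The 6·N·D formula is a common rule of thumb for dense transformer training, and the threshold value and example numbers are purely hypothetical, not a proposed standard.

```python
# Minimal sketch (illustrative only): estimating training compute for a dense
# transformer and checking it against a hypothetical regulatory threshold.
# The 6*N*D heuristic and all numbers below are assumptions for illustration.

HYPOTHETICAL_THRESHOLD_FLOP = 1e26  # illustrative cutoff, not an actual proposal


def estimate_training_flop(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate of training compute using the common ~6*N*D rule of thumb
    (forward plus backward passes for a dense transformer)."""
    return 6.0 * num_parameters * num_training_tokens


def exceeds_threshold(num_parameters: float, num_training_tokens: float) -> bool:
    """Would a model trained with these inputs cross the hypothetical threshold?"""
    return estimate_training_flop(num_parameters, num_training_tokens) >= HYPOTHETICAL_THRESHOLD_FLOP


if __name__ == "__main__":
    # Example: a 100-billion-parameter model trained on 2 trillion tokens.
    params, tokens = 1e11, 2e12
    flop = estimate_training_flop(params, tokens)
    print(f"Estimated training compute: {flop:.2e} FLOP")                       # ~1.2e+24 FLOP
    print("Above hypothetical threshold:", exceeds_threshold(params, tokens))   # False
```

Measuring compute for a real training run involves more detail (hardware utilization, multiple training stages, fine-tuning), but the point is that the quantity is well defined enough to support a bright-line rule.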
Despite these factors, targeting frontier AI regulation purely based on compute thresholds (e.g., stipulating that any AI model trained with more than a certain amount of compute is a “frontier model”) is unlikely to work as a complete solution over the longer term. For one thing, new approaches to AI development may use less compute, or use it differently. For another, connecting compute usage to stringent regulatory requirements would create a strong incentive for AI developers to find creative ways of accounting for how much compute a given system uses in order to avoid being subject to regulation. An instructive example comes from U.S. financial regulation: requirements for financial institutions to report cash transactions above $10,000 have led many individuals and businesses to break large transactions into multiple smaller ones (each below the threshold) in order to avoid regulatory scrutiny, a practice known as “structuring.”
Accordingly, the workshops included a discussion of additional criteria that could be used to designate a model as a “frontier model.” Suggestions included:
- The model’s performance on specific benchmark datasets, with the intention of capturing models that are above a certain capability level (either on broad benchmarks or more specific evaluations for dangerous capabilities).
- Alternate ways of measuring the cost of developing the model, for example, financial cost or energy cost.
- The model’s level of integration into society, in the vein of Europe’s “Very Large Online Platform” designation for online platforms with a large number of users.
None of these options are perfect, but they could perhaps be combined with compute thresholds in some way to try to capture the most advanced AI models being developed at any given time. One important design choice would be whether to primarily use a highly targeted definition—such as a very high compute threshold—with additional criteria used to draw additional models into the regulatory regime, or whether to cast a broad net and give companies ways to exempt themselves if appropriate. The Digital Millennium Copyright Act is an example of the “broad net” approach, in that it creates potential liability for a wide range of companies, but allows them safe harbor if they can demonstrate responsible behavior (e.g., a notice-and-takedown system for copyrighted content).
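As a purely illustrative sketch of the design space, the snippet below shows one way a regulator could combine a compute threshold with supplementary criteria like those listed above. Every field name and threshold is hypothetical and chosen for readability; it is not a proposed rule.

```python
# Illustrative only: combining several criteria into a single "frontier model"
# designation. All field names and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class ModelProfile:
    training_flop: float        # estimated training compute
    benchmark_score: float      # score on a designated capability benchmark (0-100)
    monthly_active_users: int   # rough proxy for level of integration into society


def is_frontier_model(profile: ModelProfile) -> bool:
    """Designate a model if it crosses the compute line or any supplementary criterion."""
    crosses_compute_line = profile.training_flop >= 1e26                    # hypothetical threshold
    crosses_capability_line = profile.benchmark_score >= 90.0               # hypothetical threshold
    crosses_integration_line = profile.monthly_active_users >= 45_000_000   # VLOP-style scale

    return crosses_compute_line or crosses_capability_line or crosses_integration_line
```

The same building blocks could instead be arranged as a “broad net”: designate a much wider set of models by default, and allow developers to demonstrate that exemption criteria are met.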
In any of these cases, the regulator would, of course, need to be authorized to update the criteria regularly, in consultation with relevant experts, to keep pace with changes in the field.
Regulatory interventions to improve the U.S. government’s visibility into the AI frontier are a promising and shovel-ready place to start.
During the workshop series, participants discussed a wide range of potential regulatory obligations that could be placed on frontier AI developers. Many possible requirements would be difficult to implement in regulation at present, given factors such as:
- The lack of mature best practices in risk management for frontier AI systems.
- The lack of capacity within regulatory agencies on AI in general, and frontier AI in particular.
- The risk that imposing too large a regulatory burden would drive AI developers to find creative ways to build systems that would not be subject to the regulatory regime.
Some or all of these concerns applied to ideas such as enforcing specific security practices, specifying how models should be assessed for dangerous capabilities, dictating what safety measures should be required to deploy a potentially risky system, or requiring licenses to train frontier AI systems.
In contrast, focusing on improving the government’s visibility into the development and use of cutting-edge AI systems appears to be a promising and relatively feasible place to start. A recent Carnegie Endowment for International Peace commentary proposed establishing a registry for large models as one potential structure to facilitate this kind of information sharing. By analogy to the requirement for corporations to register with the states in which they want to do business, registration could be required for models to be legally used, sold, or bought, generating an incentive for legitimate developers to register. Such a registry could require AI developers to report specific information about planned or completed AI systems that meet the chosen definition of “frontier AI,” as discussed above. Information to be shared securely with the regulator could include the following (a purely illustrative sketch of such a registry record follows the list):
- The amount of training compute required by the model.
- Key information about training data and model architecture (e.g., number of parameters).
- Details of model performance (e.g., performance on benchmarks of interest and results of evaluations to detect dangerous capabilities).
- Details of risk management practices (e.g., use of the NIST AI Risk Management Framework, responsible scaling policies, or similar) and security practices in place.
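To make the reporting categories above concrete, here is a minimal sketch of what a single registry submission might contain. The structure, field names, and example values are hypothetical illustrations, not a proposed or existing government schema.

```python
# Hypothetical illustration of a frontier-model registry record, mirroring the
# reporting categories listed above. Not a proposed or existing schema.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FrontierModelRegistration:
    developer: str
    model_name: str
    status: str                            # e.g., "planned" or "completed"
    training_flop_estimate: float          # amount of training compute used or planned
    parameter_count: int                   # key architecture information
    training_data_summary: str             # high-level description of training data
    benchmark_results: Dict[str, float]    # performance on benchmarks of interest
    dangerous_capability_evals: List[str]  # evaluations run and high-level outcomes
    risk_management_practices: List[str]   # e.g., "NIST AI RMF", "responsible scaling policy"
    security_practices: List[str]          # e.g., "weight storage controls", "insider threat program"
```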
The regulator could determine which information, if any, would be disclosed to the public. Transparency can play an important role in informing the public and increasing the accountability of AI developers, but it can also have costs, for instance, if companies were forced to disclose valuable IP related to how models are designed and trained, or if publicizing security practices made a company more vulnerable to cyberattacks.
Ideally, an information-sharing-focused regulatory regime would position the U.S. government to respond more capably to future developments in AI. For example, establishing a regulatory body to manage a registry of frontier models (whether as a new agency or an office within an existing department) could act as a “seed crystal” to grow greater regulatory capacity over time. What’s more, by ensuring that the regulator has access to up-to-date information about the development, capabilities, and risk management of frontier AI, this approach would also significantly increase the government’s ability to assess potential risks from increasingly advanced AI systems, and to implement a more fleshed-out regulatory regime in the future, if warranted.
Requirements around security, assessments, and risk management could be valuable, but are not yet as ripe for implementation.
While perhaps challenging to implement now, several other categories of regulatory requirements could become valuable in the future if AI systems continue to advance and risk management best practices for frontier AI become more mature.
Of these categories, security requirements to protect against model theft are perhaps the closest to being “shovel ready” for regulation. Existing standards from other software domains are highly applicable to model security: the Secure Software Development Framework (SSDF) for secure development practices, Supply Chain Levels for Software Artifacts (SLSA) for software supply chain security, ISO 27001 for information security, and the Cybersecurity Maturity Model Certification (CMMC). Red-teaming, penetration testing, insider threat programs, and secure storage of model weights are also practices that could make sense in the context of high-risk models.
Additional measures on top of a “model registration” scheme, as described above, could involve mandatory pre-deployment model assessments or evaluations. Such assessments could evaluate what capabilities new models have and what kinds of risks they pose. The current lack of a specific, agreed-on suite of assessments means it would be premature to codify requirements on this front. For now, an iterative approach is likely more appropriate, in which developers are first required to report on the assessments they are running, with those practices turned into specific regulatory requirements over time. “Responsible Scaling Policies” are an emerging framework for what this evolution might look like. To mitigate the risk of frontier AI developers “grading their own homework,” this approach would benefit from building government capacity for model assessment, especially in areas where government has particular expertise (such as evaluating risks related to cybersecurity and nuclear technologies). Supporting the maturation of a third-party auditor ecosystem and investing in advancing the science of model evaluations would be valuable intermediate goals on the path towards requiring assessments of frontier models.
Other risk management practices worth investigating further are mandatory external audits, protections for internal whistleblowers at frontier AI developers, and incident-sharing programs so that important safety-related lessons from developing and deploying frontier AI systems can be shared across the industry.
Regulations on supporting computing infrastructure could be a useful complement to frontier AI regulation, but should be carefully scoped to avoid bad incentives.
Frontier AI models are today typically trained using centralized, tightly connected clusters of leading-edge chips in large data centers run by a small number of companies. This means that at present, it could be relatively straightforward to place requirements on the providers of this computing infrastructure, which could complement rules for AI developers by helping to identify models that should be subject to reporting requirements.
There is a great deal of uncertainty, however, about how feasible it would be for AI developers to use alternative infrastructure if there were a strong incentive to do so. If the use of concentrated clusters of leading-edge chips were subject to onerous regulatory requirements, companies might instead find ways to train frontier AI systems on more decentralized and/or less advanced chips, which would be much more difficult to capture within a targeted regulatory approach. Any plan to design requirements for AI infrastructure providers would need to calibrate the regulatory burden carefully to ensure that there is not a strong incentive to skirt the requirements.
Relatively lightweight obligations for infrastructure providers could include applying Know-Your-Customer rules for users of very large quantities of compute, requiring such customers to make attestations about how they plan to use compute, and legally codifying security best practices that are already typical in large data centers.
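As a rough illustration of what a lightweight know-your-customer check might look like in practice, the sketch below flags customers whose aggregate compute usage crosses an arbitrary reporting threshold and lacks basic verification or a usage attestation. All names, fields, and numbers are hypothetical assumptions, not existing provider practice or proposed rules.

```python
# Hypothetical sketch of a compute-provider KYC/attestation check.
# Thresholds, field names, and logic are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class CustomerAccount:
    name: str
    identity_verified: bool        # basic know-your-customer verification
    attestation_on_file: bool      # customer statement about intended compute use
    chip_hours_this_quarter: float


LARGE_COMPUTE_THRESHOLD_CHIP_HOURS = 1_000_000  # arbitrary illustrative cutoff


def needs_enhanced_review(account: CustomerAccount) -> bool:
    """Flag accounts that use very large quantities of compute but lack
    identity verification or an attestation about intended use."""
    is_large_user = account.chip_hours_this_quarter >= LARGE_COMPUTE_THRESHOLD_CHIP_HOURS
    return is_large_user and not (account.identity_verified and account.attestation_on_file)
```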
Frontier AI regulation would tackle only a small fraction of the policy challenges posed by AI, and should not be seen as a complete solution.
This post, like the workshop series, focuses on regulatory approaches that could reduce risks from increasingly advanced AI systems being developed at the cutting edge of the field. While we believe that this set of issues is important and deserving of regulatory attention, we emphasize that it is only one specific piece of the broader challenges posed by AI. Many harms and risks of AI—such as algorithmic discrimination, disinformation, job displacement, and others—do not primarily arise from the most advanced AI systems, and therefore would be neglected if policymakers were to solely focus on frontier AI regulation.
Conclusion
Overall, we came away from the workshop series with the belief that there are productive steps that governments could take today to manage potential risks from frontier AI. We hope that this post helps to move that conversation forward in a constructive direction. We are very grateful to workshop participants for their time and insights, and look forward to further discussing these issues in the future.
— —
Helen Toner is director of strategy and foundational research grants at CSET, and also serves in an uncompensated capacity on the non-profit board of directors for OpenAI.
Timothy Fist is a fellow at the Center for a New American Security.