Foundational Research Grants (FRG) supports the exploration of foundational technical topics that relate to the potential national security implications of AI over the long term. In contrast to most CSET research, which is performed in-house by our team of fellows and analysts, FRG funds external projects by technical teams. The program aims to advance our understanding of underlying technical issues to shed light on questions of interest from a strategic or policy perspective.
Current areas of interest include:
- AI assurance for general-purpose systems in open-ended domains: Machine learning systems are rapidly becoming larger, more complex, more capable, and more general-purpose. Existing assurance approaches for systems with automated or autonomous capabilities do not appear to be well suited to the kinds of large-scale deep learning systems that are currently being developed and deployed. FRG is interested in whether, and how rapidly, assurance approaches suitable for such systems are likely to be developed, both in the near term and over the long term.
- Technical tools for external scrutiny of AI: As AI’s impact on the world grows, so does the need for external scrutiny of privately held AI systems, to ensure that they are being developed and used in safe and ethical ways. But granting access to outsiders has ramifications for the privacy, security, and intellectual property of AI developers. There are early indications that different technical methods—including approaches using privacy-preserving tools and hardware-based features—can help reduce these tensions. FRG is interested in investigating how well these tools can work in practice.
- Frontier AI risks and regulations: The term “frontier AI” has begun to be used to refer to general-purpose AI systems that are at or just beyond the current cutting edge. These systems raise a range of questions and policy challenges, which FRG is interested in exploring.
- AI security and nonproliferation: As AI systems become more capable, it will be important that their developers are able to prevent unauthorized actors from accessing or using them. FRG is interested in supporting work that could make this more feasible.
FRG grantees include:
- Anthony Corso and Mykel Kochenderfer at Stanford University, for work on the progress and outlook for the reliability of AI systems.
- The Python Software Foundation, for work to improve security incident reporting infrastructure for the Python Package Index.
- OpenMined, for work supporting the Christchurch Call Initiative on Algorithmic Outcomes.
- The Center for AI Safety, for work on measuring and mitigating AI deception.
- Peter Henderson at Princeton University, for work on the safety of different model release strategies.
- Tim G. J. Rudner at New York University, for work on quantifying uncertainty in large language models.
Calls for research ideas:
- Expanding the toolkit for frontier model releases (currently in progress; submissions closed July 3, 2024). Full details [PDF]
- AI assurance for general-purpose systems in open-ended domains (closed). Full details [PDF]
FRG is directed by Helen Toner, with support from Andrea Guerrero and Kendrea Beers.