Executive Summary
For decades, scientists have speculated about the possibility of machines that can improve themselves. Today, artificial intelligence (AI) systems are increasingly integral parts of the research pipeline at leading AI companies. Some observers see this as evidence that fully automated AI research and development (R&D) is on the way, potentially leading to a rapid acceleration of AI capabilities and impairing humans’ ability to understand and control AI. Others see the use of AI for research as a mundane extension of existing software tools.
This Workshop Report shares findings and conclusions from an expert workshop CSET hosted in July 2025. The workshop covered a range of issues related to the automation of AI R&D. In this report, ‘AI R&D’ refers to scientific and engineering work that improves the capabilities of AI systems, and ‘AI R&D automation’ refers broadly to any use of AI that accelerates progress in AI R&D.
Key takeaways from the workshop were as follows:
- Increasingly automated AI R&D is a potential source of major strategic surprise. While experts disagree on the likelihood, scenarios are possible in which AI R&D becomes highly automated, the pace of AI R&D accelerates dramatically, and the resulting systems pose extreme risks. This warrants preparatory action now.
- Frontier AI companies are already using AI to accelerate AI R&D, and usage is increasing as AI models get more advanced. New models are often used internally to advance AI R&D before they are released to the public.
- Experts’ views differ on how rapid and impactful AI R&D automation is likely to be. Even if the use of AI in AI R&D continues to increase, there is no consensus on whether AI progress is more likely to accelerate or plateau. What’s more, because different views are associated with different assumptions about how AI R&D works, new data on how AI R&D automation is progressing in practice may be insufficient to resolve conflicting perspectives. It thus may be difficult to either detect or rule out extreme ‘intelligence explosion’ scenarios in advance.
- Despite challenges in interpreting new evidence, better access to indicators of progress in AI R&D automation would be valuable. Existing empirical evidence, including existing benchmark evaluations, is insufficient for measuring, understanding, and forecasting the trajectory of automated AI R&D. More systematic collection of existing indicators—as well as developing ways of gathering new indicators—could provide a significantly clearer picture.
- Thoughtfully designed transparency efforts could improve access to valuable empirical information about AI R&D automation, which at present is almost fully dependent on patchy, voluntary releases of information from companies. While some early transparency mandates on frontier AI development have recently been enacted, they do not focus on indicators of progress in AI R&D automation. Policymakers have a range of options for increasing visibility into these indicators.
The full report elaborates on these takeaways: it provides examples of how frontier AI companies are using AI for R&D, delves into experts’ differing views and assumptions, suggests priority indicators to track, and lays out policy options and implications.