Executive Summary
Recent advances in artificial intelligence (AI), particularly around large language models (LLMs), have made OpenAI’s Generative Pre-trained Transformers (GPTs), Google’s Gemini, Anthropic’s Claude, and Meta’s Llama household names. The sequential releases of foundation models from a small number of big players tell a linear story of innovation, but history suggests a more complex narrative. Just as electricity in the Second Industrial Revolution would eventually spur more reliable and distributed power systems, foundation models are paving the way for what some researchers have termed “compound AI systems.” These systems encompass a set of distinct components, including at least one AI model and other components that manipulate the data in ways not learned by the model.* The components may aggregate multiple calls to a model, insert context with retrievers, or extend capabilities with tool use. Compound AI systems improve performance over the base model on existing tasks and can enable the model to perform entirely new tasks.
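The component types listed above can be made concrete with a minimal, toy sketch of a compound system that wraps a model with two non-learned components: a retriever that inserts context and an aggregator that majority-votes over multiple model calls. All names here (`call_model`, `retrieve`, `compound_answer`) are hypothetical stand-ins, not any real API, and the "model" is a deterministic stub.

```python
from collections import Counter

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a foundation model call; in practice this
    # would be an API request to an LLM.
    return "Paris" if "France" in prompt else "unknown"

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Non-learned component: rank documents by word overlap with the query.
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def compound_answer(query: str, corpus: list[str], votes: int = 3) -> str:
    # Compound system: insert retrieved context, aggregate multiple model
    # calls, and return the majority answer.
    context = " ".join(retrieve(query, corpus))
    answers = [call_model(f"{context}\n{query}") for _ in range(votes)]
    return Counter(answers).most_common(1)[0][0]

corpus = ["The capital of France is Paris.", "Go is a board game."]
print(compound_answer("What is the capital of France?", corpus))  # Paris
```

Neither the retriever nor the voting step is learned by the model, yet together they change what the system as a whole can answer, which is the defining feature of a compound AI system.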
Consider an autonomous vehicle. The vehicle itself is a complex system of systems, including power, propulsion, sensor systems, and more. No single AI model can substitute for the vehicle as a whole, but its subsystems involve both model and compound system design decisions. For example, some manufacturers do not include LiDAR (Light Detection and Ranging) in their self-driving vehicles, opting solely for cameras, while others use a sensor suite, including cameras, LiDAR, and radar. Processing these signals for a task, such as object detection, could take at least two approaches: train separate networks for each sensor or sensor type and combine their predictions (a compound AI system approach), or train a single model that fuses all sensor inputs (an AI model approach). At the level of object detection, the model and compound system approaches can be direct substitutes. This interchangeability allows lessons from one approach to spur improvements in the other. New model advances could enable compound systems to take advantage of that capability; conversely, a compound system approach could add a new sensor network, which might inspire a more efficient single-network data fusion model that shares representations between sensors for faster object detection.
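The substitutability of the two approaches can be sketched with a toy example, assuming stand-in scoring functions in place of real detection networks: "late fusion" combines per-sensor predictions (the compound system approach), while "early fusion" runs one function over the concatenated inputs (the single-model approach). With suitably chosen weights, the two produce the same detection score here.

```python
def camera_net(features: list[float]) -> float:
    # Stand-in for a network trained on camera features (here: the mean).
    return sum(features) / len(features)

def lidar_net(features: list[float]) -> float:
    # Stand-in for a separately trained LiDAR network.
    return sum(features) / len(features)

def late_fusion(camera: list[float], lidar: list[float]) -> float:
    # Compound system approach: average the per-sensor predictions.
    return 0.5 * camera_net(camera) + 0.5 * lidar_net(lidar)

def early_fusion(camera: list[float], lidar: list[float],
                 weights: list[float]) -> float:
    # Single-model approach: one weighted function over all sensor inputs.
    fused = camera + lidar
    return sum(w * x for w, x in zip(weights, fused)) / len(fused)

cam, lid = [0.9, 0.8], [0.6, 0.7]
print(late_fusion(cam, lid))                 # 0.75
print(early_fusion(cam, lid, [1.0] * 4))     # 0.75
```

Because both sensors contribute the same number of features in this toy setup, uniform weights make the single fused function reproduce the combined per-sensor predictions exactly, illustrating how the two designs can be direct substitutes at the task level.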
System-to-model innovation is an emerging innovation pathway that has driven progress in several prominent areas over the last decade. System-to-model advances include DeepMind’s incorporation of policy and value calculations in a single network with AlphaGo Zero; chain-of-thought prompting leading to OpenAI’s o1 model; the OneGen single-pass network and Cohere’s Command R family of models in retrieval-augmented generation (RAG); and circuit breakers for AI safety training in language models. System-to-model innovations close the feedback loop between model and compound system advances, opening frontier AI breakthroughs to more diverse contributions beyond the top labs.
Instead of one model-to-model pathway for progress, recent trends highlight the dynamic interplay between AI model and compound system innovation, where progress along one pathway leads to breakthroughs in the other pathway.
Figure A: AI Model and System Innovation Matrix

Policies that foster AI progress are likely to benefit model and compound system developers, but today’s debates have focused more on improving model capabilities. If recent system-to-model advances are indicative of future trends, then policymakers should also devise tailored solutions for compound system developers to encourage multiple bets on the future of AI. System-level innovations advance with the diffusion of AI and expand the base of contributors to leading-edge progress in the field. While the spread of general-purpose technologies across societies and economies is often more consequential than the innovations themselves, the diffusion of AI will not happen in a geopolitical vacuum.1 Countries that can identify and harness system-level innovations faster and more comprehensively will gain crucial economic and military advantages over competitors. This issue brief suggests a three-part framework to navigate the policy implications of system-to-model innovation:
Protect smaller companies as incubators of tacit knowledge that could seed future foundation model advances. Governments may need to consider subsidized services, such as safety testing and red team assessments at a system level, for smaller and medium-sized enterprises that lack the resources and expertise to remain compliant with the evolving policy landscape or withstand adversarial attacks. This process should include a detailed mapping of the companies, infrastructure, research organizations, data analytics firms, and technical and nontechnical talent base that comprise U.S. and allied compound AI system innovation networks to determine the actors involved and resources needed for protection.
Diffuse innovation outside of leading foundation model providers by providing models and inference compute access to develop new systems. The National Artificial Intelligence Research Resource (NAIRR) should expand programs that widen access to computational infrastructure not only for training AI models but also for inference. Such an approach would enable researchers using pre-trained foundation models to develop system-level innovations that drive progress in AI models and benefit from the new inference-time compute scaling paradigm. For example, the research community for prompt engineering is large and diverse and frequently produces new techniques that improve model performance on downstream tasks. Policymakers could facilitate the diffusion of AI by supporting work outside of the top research groups to improve future iterations of foundation models.
Anticipate future innovations by monitoring countries diffusing AI models into systems and by adapting tools and methodologies for horizon scanning, technology forecasting, and supply chain risk analysis. A system-to-model mindset could encourage a new generation of research and development agreements with U.S. allies and partners. Research communities promoting the diffusion of compound system innovations would also serve as early indicators of future capabilities in frontier models.
As with the rise of electricity in the Second Industrial Revolution, the transformation from market competition to regulated monopolies in power generation represents one possible future for foundation models and their cloud infrastructure providers. Alternatively, foundation models may continue to become increasingly commoditized. How leaders in government, industry, and civil society negotiate complex tradeoffs in innovation, security, and values-based principles will have structural implications for the distribution of resources that make up the AI stack.2 Industry leaders are already implementing novel governance frameworks, while governments are pursuing myriad regulatory approaches.3 Understanding innovation cycles will be critical as policymakers look to shape the trajectory of AI in directions that benefit the public good.
AI System-to-Model Innovation
* More than one component may itself be an AI model, each learned separately.
1. Jeffrey Ding, Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition (Princeton: Princeton University Press, 2024); Nicholas Crafts, “Artificial Intelligence as a General-Purpose Technology: An Historical Perspective,” Oxford Review of Economic Policy 37, Issue 3 (Autumn 2021): 521-536, https://academic.oup.com/oxrep/article/37/3/521/6374675?login=false.
2. “OECD AI Principles Overview,” OECD, https://oecd.ai/en/ai-principles.
3. “Responsible Scaling Policy: Version 2.1,” Anthropic, effective March 31, 2025, https://www-cdn.anthropic.com/17310f6d70ae5627f55313ed067afc1a762a4068.pdf; “Consultation Paper on AI Regulation Emerging Approaches Across the World,” UNESCO, 2024, https://unesdoc.unesco.org/ark:/48223/pf0000390979.