Many countries view artificial intelligence (AI) as critical to economic competitiveness and national security. As a result, sovereign AI—the idea that national governments should develop, control, and govern AI in order to boost economic growth, guarantee security, and ensure strategic autonomy—has become a key strategic consideration in the global AI buildout. Amid the push for sovereign AI, countries are racing to set up their own domestic compute infrastructure, curate local datasets, and train or fine-tune their own advanced large language models (LLMs).
Despite its popularity, the term “sovereign AI” remains ill-defined, and countries pursue varying degrees of sovereignty in practice. For some governments, sovereignty means controlling the full hardware-software stack. For others, it means the ability to develop AI models trained on domestic datasets to reduce dependence on foreign companies.
This paper interrogates the concept of “sovereign AI” through the lens of the technology stack: (1) compute infrastructure, (2) data, and (3) models. By analyzing the geopolitical dynamics at each layer, we argue that full sovereignty is economically and technologically infeasible for most nations, and that a “hybrid sovereignty” model is instead emerging globally. Interdependence will therefore remain the norm rather than the exception. For countries seeking reliable AI access, this poses a challenge in an increasingly unstable international environment: dependence on foreign companies means that access can be conditioned, restricted, or revoked in response to geopolitical shifts.
To read the full compendium, visit the Oxford Semiconductor Conference (OSC).