In the aftermath of the Second World War, the United States cornered the market on the world’s most destructive technology: nuclear weapons. The Roosevelt and Truman administrations could only peer dimly into the future, but they could see far enough to grasp the need for novel governance arrangements to manage the risks of new technologies in the hands of old rivals.
Presidential adviser and financier Bernard Baruch put forward a proposal to control the spread of nuclear weapons. Under the Baruch Plan, nations would agree to work with an International Atomic Development Authority to control atomic energy, manage nuclear materials and conduct inspections. Baruch understood that implementing such a plan would require ‘more than words’. As he put it in his presentation to the UN Atomic Energy Commission in 1946, ‘Before a country is ready to relinquish any winning weapons … it must have a guarantee of safety, not only against the offenders in the atomic area but against the illegal users of other weapons – bacteriological, biological, gas – perhaps – why not! – against war itself.’
The plan was grounded in higher ideals, but it came up against Cold War realities. Tensions with the Soviet Union prevented the plan’s adoption, yet the effort to anticipate risks and establish cooperative institutions to manage emerging technologies is more relevant today than at any time in the past quarter-century. Rapid technological advances – from artificial intelligence and robotics to synthetic biology and additive manufacturing – will transform the global economy and reshape political and military relationships in the international system.
Artificial intelligence presents a thorny challenge for governance. It is a general-purpose technology, similar to electricity, yet unlike electricity it depends on a narrow base of specialised talent, hardware and software.
There are three levels of analysis to consider: the characteristics of AI hardware and software, the interactive effects between hardware and software, and the system-level consequences. The ability to shape the preferences and influence the behaviour of other actors will operate differently at each level. All this suggests that governance arrangements for AI will need to take into account new geographies and not just two, but many faces of power.
Start with the characteristics of AI hardware and software. The semiconductor supply chains that are critical for building the computing power to run AI systems are not spread evenly throughout the world. They are concentrated in a few states, most of which are US allies. The costs of building semiconductor fabrication plants are high, which means that countries such as China cannot easily leap ahead in the semiconductor industry.
What matters is not simply the cost of the fabrication plants, but also their complexity and the knowledge required to design and build them. While China is making a concerted push to dominate AI chip technology, firms in the United States, the Netherlands and Japan have a chokehold on the production of the semiconductor manufacturing equipment necessary for making AI chips. Experts have noted that when it comes to semiconductors, geography is a fixed asset – at least for the foreseeable future. Governance arrangements will therefore need to account for the unique characteristics of semiconductor supply chains and devise nimble strategies for controlling and managing the various components that go into semiconductor fabrication plants, such as lithography equipment.
Where hardware is an expensive and relatively fixed asset, software is a cheap and freely floating one. Open-source frameworks, such as Google’s TensorFlow, are widely available and the barriers to entry are low. As a result, we can expect innovations to proliferate rapidly. The AI research company OpenAI’s decision to refrain from publishing the full version of its GPT-2 language model, citing concerns about malicious use, is the exception that underscores the field’s norm of openness. AI-enabled systems will alter the character of existing threats and introduce new threats that are more efficient, targeted and scalable.
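To make the barrier-to-entry point concrete, consider how little is required to build a working AI system with freely available tools. The sketch below, adapted from the standard TensorFlow quickstart pattern, defines and trains a complete handwritten-digit classifier in roughly a dozen lines; it is a minimal illustration rather than a production system, and the accuracy noted in the comments is approximate.

```python
# A minimal sketch of the low barrier to entry: a complete, trainable
# neural network built with the open-source TensorFlow framework (2.x).
import tensorflow as tf

# Load the public MNIST digit dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a small classifier: flatten each 28x28 image, add one hidden layer.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train and evaluate; on an ordinary laptop this typically reaches
# roughly 98% test accuracy within a few minutes.
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```

Anyone with a commodity computer and an internet connection can run this code, which is precisely why software innovations diffuse so much faster than hardware advantages.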
Governance arrangements must account for a proliferation of new actors, each operating with greater influence and generating innovations that diffuse more widely and empower more people than ever before. The private sector drives much of the spending and research on AI. Governance arrangements for AI will need to adapt to a world in which commercial interests and multinational corporations play an important role.
At the next level of analysis, governance arrangements should consider the interactive effects between the hardware and software requirements for designing and implementing AI systems. Here the story is best told through four possible worlds.
In the OECD world, with the help of the organisation’s new Principles on AI, the United States and its allies retain their lead over China in talent, hardware and software. In this scenario, the United States and its allies band together to control access to semiconductor supply chains and maintain their dominance of software frameworks, such as TensorFlow, the Microsoft Cognitive Toolkit and Facebook’s PyTorch. China’s limited progress in developing alternative software frameworks is indicative of this trend. The OECD world is an open one, in which the majority of nations develop and deploy AI technologies in line with liberal democratic values.
In the Made in China world, significant investments in the semiconductor industry, especially manufacturing equipment and AI chip designs, give China a lead in next-generation hardware for AI. New software frameworks and talent programmes enable China to attract the best and brightest from around the globe to power its AI revolution. In this scenario, surveillance and censorship technologies proliferate and liberal democratic values are on the defensive.
In the Shenzhen world, China hosts the next Silicon Valley and makes significant advances in software and talent, while lagging behind on semiconductor manufacturing equipment and specialised chips for powering AI systems. In this scenario, open and authoritarian societies seek to outbid each other on human talent, clash over norms and standards, and compete for influence in the technology sectors of emerging markets.
Finally, in the Silicon Valley world, the United States retains its lead in top talent and software, but falls behind as China’s investments in AI chips bear fruit. In this scenario, rival technological spheres of influence create new tensions and complicate relations with countries on the fault lines of a new geopolitical map.
Each scenario represents a different configuration of power, and each requires different governance arrangements to manage the risks and foster cooperation. AI will make power ever more contingent on the scope and domain of its use. While each scenario is possible, and the future may combine elements of all four, the likely outcome is that the United States and its allies retain their lead in most areas of AI while China advances in others. A crowded field of state and non-state actors will also shape the trajectory of AI’s development and the distribution of power among nations.
The implications for governance are twofold: new institutional arrangements will need to foster collaboration amid competition and vary in their design by scope and domain. The United States and China are intertwined economically in ways that the United States and the Soviet Union never were during the Cold War. Historical analogies have their limits, but at least one lesson from the decades-long US–Soviet competition deserves attention today: the United States and China will need to find ways to maintain cooperative relationships despite growing mutual suspicion and tension. The Pugwash conferences and the Apollo–Soyuz project of the Cold War proceeded from the recognition that, while scientific and technological collaboration would not end the rivalry between the United States and the Soviet Union, such endeavours would encourage greater mutual understanding and therefore make conflict less likely.
Governance arrangements will also need to reflect the concentration and dispersion of power that developments in AI are likely to produce. Institutions set boundaries on competition and encourage collaboration in areas of shared interest. Advances in hardware and software will come in fits and starts and favour some nations over others. No one governance arrangement will suffice; instead, states, companies and non-state actors will need to work together to devise flexible arrangements that can adapt to a rapidly changing technological landscape.
The final level of analysis that any governance arrangements for AI need to consider is the systems level. And here, the watchword is uncertainty. No future scenario is guaranteed, and many of the next-generation advances in software and computing power will have unpredictable consequences at scale. Complex systems yield unintended effects, and the interaction of software and hardware in the development of AI systems will make it increasingly difficult for nations to calculate the balance of power in key domains.
The challenge for governance of AI is to seize the opportunities that new technologies present, while managing today’s risks and anticipating those of tomorrow. Like Roosevelt and Truman, we can only see into the future dimly. But the emerging picture compels us to begin planning now for a world in which no one map captures the AI terrain.