Recent advances in generative AI, including the popularity of ChatGPT, have focused interest on the guardrails that need to be enacted to protect society and secure systems while spurring continued innovation.
CSET is publishing a series of posts to help policymakers and the general public learn more about AI regulation. Part 1 of the series focused on large language models (LLMs), human-AI interactions, and existing U.S. frameworks and guidelines. In this second post, we provide background materials on international AI regulatory efforts.
Selected International AI Regulation Efforts
Many countries and international organizations around the world have published binding and non-binding documents on AI governance. This is a non-exhaustive list of recent initiatives and related expert commentary.
- China’s New AI Governance Initiatives Shouldn’t Be Ignored, a January 2022 commentary authored by Matt Sheehan and published by the Carnegie Endowment for International Peace
- What China’s Algorithm Registry Reveals about AI Governance, a December 2022 commentary authored by Matt Sheehan and Sharon Du and published by the Carnegie Endowment for International Peace
- How will China’s Generative AI Regulations Shape the Future?, an April 2023 blog post published by Stanford’s DigiChina
- The European Union’s Artificial Intelligence Act, a regulatory proposal establishing rules for AI, initially proposed in 2021
- AI Act: a step closer to the first rules on Artificial Intelligence, a May 2023 news release from the European Union following committee approval of the EU AI Act
- Social Principles of Human-Centric AI, published by the Government of Japan in March 2019 to guide implementation of AI in society
- Governance Guidelines for Implementation of AI Principles, guidelines published by the Government of Japan in January 2022 to inform implementation of the Social Principles of Human-Centric AI
- Directive on Automated Decision-Making, published by the Government of Canada in March 2019 to govern deployment of automated decision-making tools by government agencies
- Responsible Use of Artificial Intelligence, a website developed by the Government of Canada that collates resources related to the Directive on Automated Decision-Making and supports the responsible use of AI by government agencies
- United Kingdom Ministry of Defence Artificial Intelligence Strategy, a June 2022 policy paper published by the Government of the United Kingdom outlining its defense-related AI strategy
- A pro-innovation approach to AI regulation, a policy paper published in March 2023 by the Government of the United Kingdom to outline a governance approach to AI that protects society while promoting innovation
- ICO GDPR Guidance on Artificial Intelligence, resources provided by the United Kingdom’s Information Commissioner’s Office (ICO) to help industry navigate regulatory requirements related to AI and the European Union’s General Data Protection Regulation
- AI Ethics Principles, published by the Government of Australia in November 2019 to lay out eight principles for the development and use of AI
- National AI Centre, established by the Government of Australia to help develop Australia’s AI and digital ecosystem
(Many) More International Regulation Efforts
- OECD AI Policy Observatory, OECD’s effort to monitor global AI policy activity and inform multilateral AI efforts
- Responsible and Ethical Military AI Guidelines, an August 2021 CSET issue brief by Zoe Stanley-Lockman reviewing U.S. allied approaches to the military use of AI
We hope you find these resources helpful in thinking through potential regulatory and governance approaches to AI. Inclusion in this list does not necessarily constitute a CSET endorsement of the views in the piece itself. In our next post, we will highlight resources on AI licensing, intellectual property rights challenges, and approaches to assessing AI systems.