Recent advances in generative AI, underscored by the popularity of ChatGPT, have focused interest on the guardrails needed to protect society and secure systems while spurring continued innovation. U.S. policymakers have held high-profile congressional hearings, written front-page op-eds, and convened White House summits with industry leadership. Many are wondering what form the next concrete actions should take.
While AI regulation is having its moment, a closely related area that also merits attention is AI safety and security, which can include topics ranging from determining liability for harm caused by AI, to auditing AI systems before they are deployed, to developing internationally agreed-upon guardrails on when and how AI will be used.
In this post, the first in a series, CSET has collected recently published resources to help policymakers and the general public learn more about AI regulation. This post focuses on proposed governance structures, large language models (LLMs), human-AI interactions, and existing U.S. frameworks and guidelines.
In some cases, we have also linked to CSET publications on the topic or asked our team to provide additional context and analysis based on their expertise. We hope readers find this resource useful. Inclusion in this list does not necessarily constitute a CSET endorsement of the views expressed in the piece itself.
Future posts in this series will explore potential AI governance approaches, current and potential military applications of AI, common practices in AI assessment, and other topics of interest.
Resources on Proposed U.S. or International Governance Bodies/Structures
- Governance of Superintelligence, a May 2023 blog post by executives at OpenAI proposing a model similar to the International Atomic Energy Agency (IAEA) for regulating superintelligence.
- How can Congress regulate AI? Erect guardrails, ensure accountability and address monopolistic power, a May 2023 op-ed authored by Anjana Susarla, Professor at Michigan State University, discussing Congress’ role in the regulation of AI systems.
AI, Generative AI and Large Language Model 101s
- What Are Generative AI, Large Language Models, and Foundation Models?, a May 2023 blog post published by CSET.
- What exactly is ‘responsible AI’ in principle and in practice?, a May 2021 event hosted by the Brookings Institution.
- OECD Framework for the Classification of AI systems, a tool to assist in evaluating AI systems, published by the Organisation for Economic Co-operation and Development in February 2022.
- Talking About Large Language Models, a February 2023 paper by Murray Shanahan.
- Eight Things to Know About Large Language Models, an April 2023 paper by Samuel Bowman.
- Who are we talking to when we talk to these bots?, a February 2023 blog post by Colin Fraser.
- The Illusion of China’s AI Prowess, a June 2023 op-ed authored by Helen Toner, Jenny Xiao, and Jeffrey Ding in Foreign Affairs.
Human-AI Interactions
- The Luring Test: AI and the engineering of consumer trust, a blog post authored by Michael Atleson and published by the U.S. Federal Trade Commission to provide guidance on responsible activity at the intersection of generative AI and advertising.
- Performative Power, a paper authored by Moritz Hardt, Meena Jagadeesan, and Celestine Mendler-Dünner. This paper was published as part of the 2022 NeurIPS conference and introduces a measure of a firm’s ability to steer user behavior through its algorithmic systems.
- Artificial Influence: An Analysis of AI-Driven Persuasion, a paper authored by Matthew Burtell and Thomas Woodside examining how AI may be able to persuade users.
- Co-Writing with Opinionated Language Models Affects Users’ Views, a February 2023 paper by Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, and Mor Naaman that evaluates how much language model writing assistants influenced users’ opinions in their subsequent writing.
- Trusted Partners, a February 2021 CSET issue brief authored by Dr. Margarita Konaev, Tina Huang, and Husanjot Chahal discussing human-machine teaming and the future of military AI.
Selected U.S. Government AI Frameworks and Guidelines
- Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, a February 2023 declaration released by the U.S. State Department to advance international consensus on best practices for responsible military AI development and use.
- Department of Defense Directive 3000.09 on Autonomy in Weapon Systems, DOD’s policy and responsibilities for developing and using autonomous and semi-autonomous functions in weapon systems, last updated in January 2023.
- Defense Innovation Unit Responsible AI Guidelines, published by the DIU to guide Department of Defense AI vendors in the development of systems in line with the DOD’s ethical principles for AI.
- Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities, the U.S. Government Accountability Office’s 2021 report outlining accountability practices for the U.S. Government’s use of AI.
- Department of Energy Risk Management Playbook, DOE’s reference guide for identifying and managing risk in the development of AI systems.
- AI Risk Management Framework, a voluntary tool published by the National Institute of Standards and Technology in January 2023 to manage risks associated with AI systems.
- Blueprint for an AI Bill of Rights, the White House Office of Science and Technology Policy’s October 2022 release outlining five principles that should govern the development and use of AI systems.
- U.S. Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway, the DOD’s June 2022 guidance for operationalizing the use of AI in U.S. military operations.
- This strategy builds upon the February 2020 Ethical Principles for AI and the May 2021 Implementing Responsible AI memos.
- Executive Order 13859: Maintaining American Leadership in Artificial Intelligence, a February 2019 White House Executive Order outlining policies and principles for supporting U.S. leadership in AI and related standards.
- Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, a December 2020 White House Executive Order establishing principles to govern U.S. federal agency adoption and deployment of AI systems.
- NOTEWORTHY: DoD Autonomous Weapons Policy, a February 2023 analysis published by the Center for a New American Security and authored by Paul Scharre breaking down DOD’s January 2023 update to its policy on autonomous weapons systems.
- Big Tech Goes to War, an op-ed authored by CSET’s Emelia Probasco and former Deputy Secretary of Defense (Acting) Christine Fox discussing the tech sector’s role in conflict, with a specific focus on its involvement in Ukraine.
We hope you find these resources helpful in thinking through regulatory approaches to AI. In our next post, we will highlight selected international AI regulation efforts, potential governance approaches, and common practices in AI assessment.