CyberAI

CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies on appointing Chief AI Officers, managing AI risk, and other processes following the October 30, 2023, Executive Order on AI.

There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point for identifying key provisions and monitoring the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes interesting provisions, shares some of our initial reactions, and highlights CSET research that may help the USG tackle the EO.

In a KCBS Radio segment that explores the rapid rise of AI and its potential impact on the 2024 election, CSET's Josh Goldstein provides his expert insights.

The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap

Ronnie Kinoshita, Luke Koslosky, and Tessa Baker
| May 3, 2024

On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists the agencies responsible for implementing each EO provision, along with the associated deadlines.

What Does AI Red-Teaming Actually Mean?

Jessica Ji
| October 24, 2023

“AI red-teaming” is currently a hot topic, but what does it actually mean? This blog post explains the term’s cybersecurity origins, why AI red-teaming should incorporate cybersecurity practices, and how its evolving definition and sometimes inconsistent usage can be misleading for policymakers interested in exploring testing requirements for AI systems.

Skating to Where the Puck Is Going

Helen Toner, Jessica Ji, John Bansemer, and Lucy Lim
| October 2023

AI capabilities are evolving quickly and pose novel—and likely significant—risks. In these rapidly changing conditions, how can policymakers effectively anticipate and manage risks from the most advanced and capable AI systems at the frontier of the field? This Roundtable Report summarizes some of the key themes and conclusions of a July 2023 workshop on this topic jointly hosted by CSET and Google DeepMind.

Techniques to Make Large Language Models Smaller: An Explainer

Kyle Miller and Andrew Lohn
| October 11, 2023

This explainer provides an overview of techniques for producing smaller, more efficient language models that require fewer resources to develop and operate. Importantly, information on how to apply these techniques, along with many of the resulting small models, is openly available online for anyone to use. The combination of small (i.e., easy to use) and open (i.e., easy to access) models could have significant implications for artificial intelligence development.
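Quantization, one common technique of the kind this explainer surveys, shrinks a model by storing its weights at lower numeric precision. As a rough, hypothetical illustration only (the function, weights, and values below are our own, not drawn from the explainer), here is a minimal C sketch of symmetric int8 quantization:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Symmetric int8 quantization: map floats in [-max_abs, max_abs]
 * onto the integer range [-127, 127] with a single scale factor.
 * Each weight then needs 1 byte instead of 4. */
static void quantize_int8(const float *w, int8_t *q, size_t n, float *scale) {
    float max_abs = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float a = fabsf(w[i]);
        if (a > max_abs) max_abs = a;
    }
    *scale = max_abs / 127.0f;
    for (size_t i = 0; i < n; i++) {
        /* Guard against an all-zero weight array. */
        q[i] = (*scale == 0.0f) ? 0 : (int8_t)lroundf(w[i] / *scale);
    }
}

int main(void) {
    /* Hypothetical weights; a real model holds billions. */
    float w[] = {0.12f, -0.53f, 0.91f, -0.02f};
    int8_t q[4];
    float scale;
    quantize_int8(w, q, 4, &scale);
    for (int i = 0; i < 4; i++) {
        /* Dequantize to show the (slightly lossy) round trip. */
        printf("%+.3f -> %4d -> %+.3f\n", w[i], q[i], q[i] * scale);
    }
    return 0;
}
```

The round trip loses a little precision per weight, which is the essential trade-off: a quarter of the memory in exchange for a small, usually tolerable, loss of accuracy.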

Memory Safety: An Explainer

Chris Rohlf
| September 26, 2023

Memory safety issues remain endemic in cybersecurity and are often seen as a never-ending source of cyber vulnerabilities. Recently, the topic has gained prominence with the White House Office of the National Cyber Director (ONCD) releasing a request for comments on how to strengthen the open-source ecosystem. But what exactly is memory safety? This blog describes the historical antecedents in computing that helped create one aspect of today’s insecure cyber ecosystem. There will be no quick fixes, but there is encouraging progress toward addressing these long-standing security issues.
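For readers new to the term, the hypothetical C snippet below contrasts an unchecked buffer copy with a bounds-respecting alternative. It is a minimal sketch of the class of bug at issue, not an example taken from the post:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[8];
    const char *input = "password1";   /* 9 chars + NUL = 10 bytes */

    /* Unsafe: strcpy copies until it hits the NUL terminator, so
     * it would write 10 bytes into this 8-byte buffer -- a classic
     * spatial memory safety violation that C compiles without
     * complaint. Left commented out deliberately:
     * strcpy(buf, input);
     */

    /* Safer: snprintf never writes more than sizeof(buf) bytes and
     * always NUL-terminates, truncating the input rather than
     * corrupting adjacent memory. */
    snprintf(buf, sizeof buf, "%s", input);
    printf("%s\n", buf);   /* prints "passwor" */
    return 0;
}
```

Memory-safe languages reject or detect the unsafe pattern automatically, which is one reason they feature prominently in discussions of structural fixes for this class of vulnerability.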

Universities can build more inclusive computer science programs by addressing the reasons that students may be deterred from pursuing the field. This blog post explores some of those reasons and the features of CS education that cause them, and it provides recommendations on how to design learning experiences that are safer and more exploratory for everyone.

Onboard AI: Constraints and Limitations

Kyle Miller and Andrew Lohn
| August 2023

Artificial intelligence that makes news headlines, such as ChatGPT, typically runs in well-maintained data centers with an abundant supply of compute and power. However, these resources are more limited on many real-world systems, such as drones, satellites, and ground vehicles. As a result, the AI that can run onboard these devices will often be inferior to state-of-the-art models, which can affect its usability and create a need for additional safeguards in high-risk contexts. This issue brief contextualizes these challenges and provides policymakers with recommendations on how to engage with these technologies.
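As a back-of-the-envelope illustration of the gap, the sketch below estimates the memory required just to store a model’s weights at different precisions. The parameter counts are our own illustrative assumptions, not figures from the brief:

```c
#include <stdio.h>

/* Rough weight-storage estimate: parameters x bytes per parameter.
 * Ignores activations, caches, and runtime overhead, all of which
 * add substantially to the true footprint. */
int main(void) {
    /* Illustrative model sizes in parameters (assumptions). */
    double params[] = {7e9, 70e9};
    const char *names[] = {"7B model", "70B model"};
    /* Bytes per weight at fp32, fp16, and int8 precision. */
    int bytes[] = {4, 2, 1};
    const char *prec[] = {"fp32", "fp16", "int8"};

    for (int m = 0; m < 2; m++) {
        for (int p = 0; p < 3; p++) {
            double gb = params[m] * bytes[p] / 1e9;
            printf("%-9s @ %s: %6.1f GB\n", names[m], prec[p], gb);
        }
    }
    return 0;
}
```

Even at int8 precision, the weights of a 7-billion-parameter model occupy roughly 7 GB before any runtime overhead, more memory than many embedded platforms carry, which is why onboard AI often lags well behind the data-center frontier.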