CSET submitted the following comment in response to a Request for Comment (RFC) from the Office of Management and Budget (OMB) about a draft memorandum providing guidance to government agencies on the appointment of Chief AI Officers, risk management for AI, and other processes following the October 30, 2023 Executive Order on AI.
There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point for identifying key provisions and monitoring the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes notable provisions, shares some of our initial reactions, and highlights CSET research that may help the USG tackle the EO.
In a KCBS Radio segment that explores the rapid rise of AI and its potential impact on the 2024 election, CSET's Josh Goldstein provides his expert insights.
On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists which agencies are responsible for implementing each provision and the corresponding deadlines.
“AI red-teaming” is currently a hot topic, but what does it actually mean? This blog post explains the term’s cybersecurity origins, why AI red-teaming should incorporate cybersecurity practices, and how its evolving definition and sometimes inconsistent usage can be misleading for policymakers interested in exploring testing requirements for AI systems.
Helen Toner, Jessica Ji, John Bansemer, and Lucy Lim | October 2023
AI capabilities are evolving quickly and pose novel—and likely significant—risks. In these rapidly changing conditions, how can policymakers effectively anticipate and manage risks from the most advanced and capable AI systems at the frontier of the field? This Roundtable Report summarizes some of the key themes and conclusions of a July 2023 workshop on this topic jointly hosted by CSET and Google DeepMind.
This explainer overviews techniques for producing smaller, more efficient language models that require fewer resources to develop and operate. Importantly, information on how to apply these techniques, along with many of the resulting small models, is openly available online for anyone to use. The combination of small (i.e., easy to use) and open (i.e., easy to access) could have significant implications for artificial intelligence development.
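Quantization is one representative technique in this family. As a minimal, hypothetical sketch (not drawn from the explainer itself), the C program below compresses 32-bit floating-point weights to 8-bit integers using a single shared scale factor, cutting memory use roughly fourfold at the cost of some precision:

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Symmetric int8 quantization: map floats in [-max|w|, +max|w|]
 * onto [-127, 127] using one shared scale factor. */
static float quantize(const float *w, int8_t *q, size_t n) {
    float maxabs = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float a = fabsf(w[i]);
        if (a > maxabs) maxabs = a;
    }
    float scale = (maxabs > 0.0f) ? maxabs / 127.0f : 1.0f;
    for (size_t i = 0; i < n; i++)
        q[i] = (int8_t)lroundf(w[i] / scale);
    return scale; /* needed to dequantize: w ≈ q * scale */
}

int main(void) {
    float w[] = {0.42f, -1.30f, 0.07f, 2.15f};
    int8_t q[4];
    float scale = quantize(w, q, 4);
    for (int i = 0; i < 4; i++)
        printf("%.3f -> %d (reconstructed: %.3f)\n",
               w[i], q[i], q[i] * scale);
    return 0;
}
```

Production quantization schemes are more sophisticated (per-channel scales, calibration data), but the core trade of precision for size is the same.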
Memory safety issues remain endemic in cybersecurity and are often seen as a never-ending source of vulnerabilities. The topic has recently gained prominence, with the White House Office of the National Cyber Director (ONCD) releasing a request for comments on how to strengthen the open-source ecosystem. But what exactly is memory safety? This blog describes the historical antecedents in computing that helped create one aspect of today’s insecure cyber ecosystem. There will be no quick fixes, but there is encouraging progress toward addressing these long-standing security issues.
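To make the term concrete: the archetypal memory safety bug is a write past the end of a buffer. This illustrative C snippet (our example, not one from the blog) compiles cleanly yet corrupts adjacent stack memory:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char name[8];
    /* Classic memory safety bug: strcpy performs no bounds checking,
     * so this 30-byte string overflows the 8-byte buffer and
     * overwrites whatever happens to sit next to it on the stack. */
    strcpy(name, "a string that is far too long");
    printf("%s\n", name);
    return 0;
}
```

In C the remedy is bounds-checked copying (e.g., snprintf with an explicit size limit); memory-safe languages rule out this class of bug by construction.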
Universities can build more inclusive computer science programs by addressing the reasons that students may be deterred from pursuing the field. This blog post explores some of those reasons and the features of CS education that cause them, and provides recommendations on how to design learning experiences that are safer and more exploratory for everyone.
Artificial intelligence that makes news headlines, such as ChatGPT, typically runs in well-maintained data centers with an abundant supply of compute and power. However, these resources are far more limited on many real-world systems, such as drones, satellites, and ground vehicles. As a result, the AI that can run onboard these devices will often be inferior to state-of-the-art models, which can affect both their usability and the need for additional safeguards in high-risk contexts. This issue brief contextualizes these challenges and provides policymakers with recommendations on how to engage with these technologies.