The October 30, 2023, White House executive order on artificial intelligence requires companies developing the most advanced AI models to report safety testing results to the federal government. CSET Horizon Junior Fellow Thomas Woodside writes that these requirements are a good first step toward managing uncertain risks, and that Congress should consider codifying them into law.
In an Australian Broadcasting Corporation 7.30 segment discussing concerns about the current understanding of AI technology, Helen Toner provided expert insights.
U.S. technology companies have become important actors in modern conflicts, and several of them have meaningfully contributed to Ukraine’s defense. But many of these companies are deeply entangled with China, which could complicate their decision-making in a potential Taiwan contingency.
In an op-ed published in Lawfare, CSET’s Lauren Kahn discusses the increasing integration of artificial intelligence (AI) into military operations globally and the need for effective governance to avoid potential mishaps and escalation.
This guide provides a run-down of CSET’s research since 2019 for first-time visitors and long-term fans alike. Quickly get up to speed on our “must-read” research and learn about how we organize our work.
In an op-ed published in Breaking Defense, CSET Space Force Fellow Michael O'Connor delves into growing concerns about AI safety and the need for rigorous safety testing before AI systems are deployed.
On October 30, 2023, the Biden administration released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). This blog examines a potential lack of balance in policy discussions between the restrictions that come with governance and the promise of opportunity unleashed by innovation. Is it possible to apply AI standards, governance, safeguards, and protections in a manner that stimulates innovation, competitiveness, and global leadership for the United States in AI?
There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point for identifying key provisions and monitoring the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes interesting provisions, shares some of our initial reactions, and highlights CSET research that may help the USG tackle the EO.