Applications

The October 30, 2023, White House executive order on artificial intelligence requires companies developing the most advanced AI models to report safety testing results to the federal government. CSET Horizon Junior Fellow Thomas Woodside writes that these requirements are a good first step toward managing uncertain risks, and that Congress should consider codifying them into law.

Should we be concerned about the future of artificial intelligence?

Australian Broadcasting Corporation
| February 22, 2024

In a segment on the Australian Broadcasting Corporation's 7.30 program discussing concerns about the current understanding of AI technology, Helen Toner provided her expert insights.

Which Ties Will Bind?

Sam Bresnick, Ngor Luong, and Kathleen Curlee
| February 2024

U.S. technology companies have become important actors in modern conflicts, and several of them have meaningfully contributed to Ukraine's defense. But many of these companies are deeply entangled with China, which could complicate their decision-making in a potential Taiwan contingency.

In an article published by TIME, Rita Konaev provided her expert insights on the involvement of tech giants in the Russia-Ukraine War.

CSET experts discussed the U.S. tech industry's involvement in the war in Ukraine and what it means for their role in conflicts of the future.

In an op-ed published in Lawfare, CSET's Lauren Kahn discusses the increasing integration of artificial intelligence (AI) in military operations globally and the need for effective governance to avoid potential mishaps and escalation.

CSET’s Must Read Research: A Primer

Tessa Baker
| December 18, 2023

This guide provides a rundown of CSET's research since 2019 for first-time visitors and longtime fans alike. Quickly get up to speed on our "must-read" research and learn how we organize our work.

In an op-ed published in Breaking Defense, CSET Space Force Fellow Michael O'Connor delves into the increasing concerns about AI safety and the need for rigorous safety testing before AI systems are deployed.

Commentary: Balancing AI Governance with Opportunity

Jaret C. Riddick
| November 30, 2023

On October 30, 2023, the Biden administration released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). This blog examines a potential lack of balance in policy discussions between the restrictions that come with governance and the promise of opportunity unleashed by innovation. Is it possible to apply AI standards, governance, safeguards, and protections in a manner that stimulates innovation, competitiveness, and global leadership for the United States in AI?

There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point for identifying key provisions and monitoring the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes interesting provisions, shares some of our initial reactions, and highlights some of CSET’s research that may help the U.S. government tackle the EO.