In an op-ed published in Breaking Defense, CSET Space Force Fellow Michael O'Connor delves into the increasing concerns about AI safety and the need for rigorous safety testing before AI systems are deployed.

Commentary: Balancing AI Governance with Opportunity

Jaret C. Riddick
| November 30, 2023

On October 30, 2023, the Biden administration released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). This blog contemplates a potential lack of balance in the policy discussions between the restrictions that come with governance and the promise of opportunity unleashed by innovation. Is it possible to apply AI standards, governance, safeguards, and protections in a manner that stimulates innovation, competitiveness, and global leadership for the United States in AI?

There’s a lot to digest in the White House’s October 30 AI Executive Order. Our tracker is a useful starting point to identify key provisions and monitor the government’s progress against specific milestones, but grappling with the substance is an entirely different matter. This blog post, focusing on Section 4 of the EO (“Developing Guidelines, Standards, and Best Practices for AI Safety and Security”), is the first in a series that summarizes interesting provisions, shares some of our initial reactions, and highlights some of CSET’s research that may help the USG tackle the EO.

This trip report covers the major takeaways from the Future of Drones in Ukraine conference, co-hosted by the U.S. Defense Innovation Unit and Ukraine’s Brave1. It gives an overview of how drones are being deployed in Ukraine, the pace and scale of technological and operational innovation for inexpensive and commercially available drones, and the future outlook for scaling, both in Ukraine and through the U.S. Replicator initiative.

On October 30, 2023, the Biden administration released its long-awaited Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CSET has broken down the EO, focusing on specific government deliverables. Our EO Provision and Timeline tracker lists which agencies are responsible for actioning EO provisions and their deadlines.

Working Through Our Global AI Trust Issues

Kathleen Curlee
| November 1, 2023

The use of the word “trustworthy” in relation to AI has sparked debate among policymakers and experts alike. This blog post explores different understandings of trustworthy AI among international actors, as well as challenges in establishing an international trustworthy AI consensus.

Decoding Intentions

Andrew Imbrie, Owen Daniels, and Helen Toner
| October 2023

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? Policymakers can send credible signals of their intent by making pledges or committing to undertaking certain actions for which they will pay a price—political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat. Talk is cheap, but inadvertent escalation is costly to all sides.

CSET hosted a discussion on the Department of Defense's recently announced Replicator initiative to field thousands of small, low-cost, autonomous systems.

The Inigo Montoya Problem for Trustworthy AI (International Version)

Emelia Probasco and Kathleen Curlee
| October 2023

Australia, Canada, Japan, the United Kingdom, and the United States emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each of these principles in slightly different ways that could have large impacts on interoperability and the formulation of international norms. This creates what we call the “Inigo Montoya problem” in trustworthy AI, inspired by "The Princess Bride" movie quote: “You keep using that word. I do not think it means what you think it means.”

In a commentary published by Nature, Josh A. Goldstein and Zachary Arnold, along with co-authors, explore how artificial intelligence, including large language models like ChatGPT, can enhance science advice for policymaking.