Applications

Decoding Intentions

Andrew Imbrie, Owen Daniels, and Helen Toner | October 2023

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? They can send credible signals of their intent by making pledges or commitments to undertake certain actions for which they will pay a price—political, reputational, or monetary—if they back down or fail to make good on their initial promise or threat. Talk is cheap, but inadvertent escalation is costly to all sides.

CSET hosted a discussion on the Department of Defense's recently announced Replicator initiative to field thousands of small, low-cost autonomous systems.

The Inigo Montoya Problem for Trustworthy AI (International Version)

Emelia Probasco and Kathleen Curlee | October 2023

Australia, Canada, Japan, the United Kingdom, and the United States all emphasize principles of accountability, explainability, fairness, privacy, security, and transparency in their high-level AI policy documents. But while the words are the same, these countries define each principle in slightly different ways that could have large impacts on interoperability and the formulation of international norms. This creates what we call the "Inigo Montoya problem" in trustworthy AI, inspired by the quote from "The Princess Bride": "You keep using that word. I do not think it means what you think it means."

In a commentary published by Nature, Josh A. Goldstein and Zachary Arnold, along with co-authors, explore how artificial intelligence, including large language models like ChatGPT, can enhance science advice for policymaking.

Replicator: A Bold New Path for DoD

Michael O’Connor | September 18, 2023

The Replicator effort by the U.S. Department of Defense (DoD) is intended to overcome some of the military challenges posed by China’s People’s Liberation Army (PLA). This blog post identifies tradeoffs for the Department to consider as it charts the path for Replicator and provides a sense of industry's readiness to support the effort.

Recent announcements from both Pentagon and congressional leaders offer a significant opportunity to rapidly deliver autonomous systems technology at scale for U.S. warfighters well into the future. Dr. Jaret Riddick, CSET Senior Fellow and former Principal Director for Autonomy in USD(R&E), offers his perspective on DoD's Replicator initiative and recent legislative proposals on DoD autonomy.

CSET's Anna Puglisi was featured in a Fox News article that discusses the recent Senate Energy Committee hearing. The hearing highlighted both the potential threats and opportunities associated with the integration of artificial intelligence (AI) into the U.S. energy sector and daily life. Concerns were raised about China's AI advancements and the absence of a strategic AI plan in the U.S. Puglisi emphasized the need for updated policies to tackle challenges posed by China and global players in academia and research.

In a PBS broadcast, CSET's Anna Puglisi shared her insights from Thursday's Senate Energy Committee hearing on advances in artificial intelligence and U.S. technological competitiveness.

In an op-ed published in Foreign Affairs, CSET’s Margarita Konaev and Owen J. Daniels analyze Ukraine's summer counteroffensive against Russia and the obstacles it has faced, while acknowledging the resilience and adaptability of the Ukrainian military amid those challenges.

In an op-ed published in Breaking Defense, CSET's Owen J. Daniels discusses the potential impact of artificial intelligence (AI) on modern warfare and emphasizes the need for the Pentagon to harness AI purposefully.