Assessment

Into the Minds of China’s Military AI Experts

Foreign Policy
| July 18, 2024

In his Foreign Policy op-ed, Sam Bresnick discusses the significance of the first U.S.-China dialogue on AI military risks.

Join us for the next session of our Security and Emerging Technology Seminar Series on August 1 at 12 p.m. ET. This session will feature a discussion on the President’s Council of Advisors on Science and Technology (PCAST) Report on Strategy for Cyber-Physical Resilience.

Comment on Commerce Department RFI 89 FR 27411

Catherine Aiken, James Dunham, Jacob Feldgoise, Rebecca Gelles, Ronnie Kinoshita, Mina Narayanan, Christian Schoeberl
| July 16, 2024

CSET submitted the following comment in response to a Request for Information (RFI) from the Department of Commerce regarding 89 FR 27411.

In their op-ed featured in Fortune, Dewey Murdick and Owen J. Daniels provide their expert analysis of the Supreme Court's decision overturning the Chevron doctrine and its implications for artificial intelligence (AI) governance.

Enabling Principles for AI Governance

Owen Daniels, Dewey Murdick
| July 2024

How to govern artificial intelligence is a concern that is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.

How AI is changing warfare

The Economist
| June 20, 2024

CSET Research Fellow Sam Bresnick provided his expert insights in an article published by The Economist discussing the adoption of advanced technology and artificial intelligence in militaries.

In his op-ed featured in the Bulletin of the Atomic Scientists, Owen J. Daniels provides his expert analysis of California’s latest AI Bill, SB 1047.

Policy and research communities strive to mitigate AI harms while maximizing its benefits. Achieving effective and trustworthy AI requires a shared language. An analysis of policies across different countries and of the research literature identifies consensus on six critical concepts: accountability, explainability, fairness, privacy, security, and transparency.

This paper is the fifth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. This paper explores the opportunities and challenges of building AI systems that “know what they don’t know.”

China has become a scientific superpower

The Economist
| June 12, 2024

CSET ETO Analytic Lead Zachary Arnold provided his expert insights in an article published by The Economist discussing the rapid growth and achievements of Chinese scientific research.