CSET’s Dewey Murdick and Owen J. Daniels shared their expert analysis in an op-ed published by Fortune. They discuss the Supreme Court’s decision overturning the Chevron doctrine and its implications for artificial intelligence (AI) governance.
They write, “To rise to the challenge of AI governance in this new environment, the U.S. needs nimble, forward-thinking policies to protect against AI’s risks while promoting American innovation.”
Murdick and Daniels’ piece builds on their newly published CSET research, where they present three principles to enable an agile approach to AI governance.
- Know the terrain of AI risk and harm: Use incident tracking and horizon-scanning across industry, academia, and the government to understand the extent of AI risks and harms; gather supporting data to inform governance efforts and manage risk.
- Prepare humans to capitalize on AI: Develop AI literacy among policymakers and the public to be aware of AI opportunities, risks, and harms while employing AI applications effectively, responsibly, and lawfully.
- Preserve adaptability and agility: Develop policies that can be updated and adapted as AI evolves, avoiding onerous regulations or regulations that become obsolete with technological progress; ensure that legislation does not allow incumbent AI firms to crowd out new competitors through regulatory capture.
To read the full op-ed, visit Fortune.