In his op-ed in TIME, Jack Corrigan discusses the landmark antitrust ruling that officially named Google a monopoly, marking the first significant antitrust defeat for a major internet platform in over two decades.
Since 2022, U.S. export controls have restricted the export of the highest-performing AI chips to China. The Biden administration likely did not intend to control CPUs (i.e., general-purpose processors) with these restrictions. However, CPUs are increasingly subject to export controls because chip designers are incorporating specialized elements for AI computation into CPUs. In this blog post, we discuss the implications of controlling AI-capable CPUs and make recommendations for the Bureau of Industry and Security (BIS) at the U.S. Department of Commerce.
In their article featured in the Council on Foreign Relations, Jack Corrigan and Owen J. Daniels provide their expert analysis on the Chevron Doctrine Supreme Court decision and its implications for artificial intelligence (AI) governance.
A CSET Data Snapshot was cited in an article published by The Wall Street Journal. The piece discusses Huawei's advancements in developing a new AI chip, the Ascend 910C, which positions the company to challenge U.S. tech giant Nvidia in the Chinese market.
Jack Corrigan, Owen Daniels, Lauren Kahn, and Danny Hague | July 2024
A core question in policy debates around artificial intelligence is whether federal agencies can use their existing authorities to govern AI or whether the government needs new legal powers to manage the technology. The authors argue that relying on existing authorities is the most effective approach to promoting the safe development and deployment of AI systems, at least in the near term. This report outlines a process for identifying existing legal authorities that could apply to AI and highlights areas where additional legislative or regulatory action may be needed.
In their op-ed featured in Fortune, Dewey Murdick and Owen J. Daniels provide their expert analysis on the Chevron Doctrine Supreme Court decision and its implications for artificial intelligence (AI) governance.
How to govern artificial intelligence is a concern that is rightfully top of mind for lawmakers and policymakers. To govern AI effectively, regulators must 1) know the terrain of AI risk and harm by tracking incidents and collecting data; 2) develop their own AI literacy and build better public understanding of the benefits and risks; and 3) preserve adaptability and agility by developing policies that can be updated as AI evolves.
CSET Research Fellow Sam Bresnick provided his expert insights in an article published by The Economist on the adoption of advanced technology and artificial intelligence in militaries.