Assessment

This explainer defines criteria for effective AI incident collection and identifies tradeoffs among three potential reporting models: mandatory, voluntary, and citizen reporting.

CSET's Anna Puglisi was featured in a Fox News article discussing a recent Senate Energy Committee hearing. The hearing highlighted both the threats and opportunities of integrating artificial intelligence (AI) into the U.S. energy sector and daily life. Witnesses raised concerns about China's AI advancements and the absence of a strategic AI plan in the United States. Puglisi emphasized the need for updated policies to address challenges posed by China and other global players in academia and research.

In an article published by The New York Times, CSET's Executive Director Dewey Murdick provided insights into the challenges of regulating rapidly evolving artificial intelligence (AI) technology.

In an op-ed featured in Barron's, CSET's Emily S. Weinstein discusses the Biden administration's recently proposed regulations restricting U.S. investments in critical technology sectors in China. The regulations target semiconductors, microelectronics, quantum technologies, and AI systems, citing concerns that advances in these areas could benefit the militaries of adversaries such as China.

In their op-ed featured in The Hill, CSET's Dewey Murdick and Jack Corrigan provide expert analysis on the rapid emergence of generative artificial intelligence tools like OpenAI's ChatGPT and Google's Bard. The piece delves into growing concerns among leaders in government, industry, and academia over control of the development and use of this emerging technology, and proposes solutions to address those concerns.

In a recent Bloomberg article, CSET's Helen Toner provides expert analysis on Beijing's implementation of new regulations governing artificial intelligence (AI) services.

Understanding AI Harms: An Overview

Heather Frase and Owen Daniels | August 11, 2023

As policymakers decide how best to regulate AI, they first need to grasp the different types of harm that various AI applications might cause at the individual, national, and even societal levels. To help build that understanding, this blog post presents some key components and characteristics of AI harm.

In a thought-provoking op-ed featured in Lawfare, CSET's Zachary Arnold and Micah Musser delve into the dynamic discourse surrounding the regulation of artificial intelligence (AI).

Two CSET researchers coauthored a new multi-organization report on the safety of AI systems, led by OpenAI and the Berkeley Risk and Security Lab. The report, published on arXiv, identifies six confidence-building measures (CBMs) that AI labs could adopt for foundation AI models to reduce hostility, prevent conflict escalation, and improve trust between parties.

CSET submitted a comment in response to a Request for Information (RFI) from the National Science Foundation (NSF) about the development of the newly established Technology, Innovation, and Partnerships (TIP) Directorate, in accordance with the CHIPS and Science Act of 2022.