As policymakers decide how best to regulate AI, they first need to grasp the different types of harm that various AI applications might cause at the individual, national, and even societal levels. To clarify that landscape, this blog post outlines the key components and characteristics of AI harm.
In a thought-provoking op-ed featured in Lawfare, CSET's Zachary Arnold and Micah Musser delve into the dynamic discourse surrounding the regulation of artificial intelligence (AI).
Two CSET researchers coauthored a new multi-organization report on the safety of AI systems, led by OpenAI and the Berkeley Risk and Security Lab. The report, published on arXiv, identifies six confidence-building measures (CBMs) that AI labs could adopt to reduce hostility, prevent conflict escalation, and improve trust between parties with respect to foundation AI models.
CSET submitted the following comment in response to a Request for Information (RFI) from the National Science Foundation (NSF) about the development of the newly established Technology, Innovation, and Partnerships (TIP) Directorate, in accordance with the CHIPS and Science Act of 2022.
Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are deployed.
This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required to identify AI harm, their basic relational structure, and their definitions, without imposing a single interpretation of AI harm. The brief concludes with an example of how to apply and customize the framework while preserving its modular structure.
On July 21, the White House announced voluntary commitments from seven AI firms to ensure safe, secure, and transparent AI. CSET’s research provides important context to this discussion.
With the rapid integration of AI into our daily lives, we must all learn when and whether to trust the technology, understand its capabilities and limitations, and adapt as these systems — and our functional relationships with them — evolve.
In an article published by The Washington Post discussing the competition between the United States and China in artificial intelligence and the two countries' differing approaches to regulation, CSET's Helen Toner provided her expert insight.
In an article published by The New York Times discussing the increasing use of artificial intelligence in political campaigns and the concerns it raises about disinformation and manipulation, CSET's Josh A. Goldstein provides his expert insight.
CSET's AI Assessment team provides a template that helps organizations create profiles to guide the management and deployment of AI systems in line with NIST's AI Risk Management Framework.