In their op-ed featured in The Hill, CSET's Dewey Murdick and Jack Corrigan provide expert analysis on the rapid emergence of generative artificial intelligence tools like OpenAI's ChatGPT and Google's Bard. The piece delves into growing concerns among leaders in government, industry, and academia over who controls the development and use of this emerging technology, and offers solutions to address them.
In a recent Bloomberg article, CSET's Helen Toner provides her expert analysis of Beijing's implementation of new regulations governing artificial intelligence (AI) services.
CSET's Ngor Luong provided her expert analysis in an article published by Fox Business. The article delves into the Biden administration's recently proposed restrictions on U.S. investments in China's technology sector.
As policymakers decide how best to regulate AI, they first need to grasp the different types of harm that various AI applications might cause at the individual, national, and even societal levels. To build that understanding, this blog post presents key components and characteristics of AI harm.
In a thought-provoking op-ed featured in Lawfare, CSET's Zachary Arnold and Micah Musser delve into the dynamic discourse surrounding the regulation of artificial intelligence (AI).
Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are deployed.
This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required to identify AI harm, their basic relational structure, and their definitions, without imposing a single interpretation of AI harm. The brief concludes with an example of how to apply and customize the framework while preserving its modular structure.
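To make the framework's modular structure more concrete, here is a minimal sketch of how its elements might be encoded as data structures. This is purely illustrative: the class and field names below (Entity, HarmEvent, harm_type, tangible, and so on) are assumptions chosen for demonstration, not the report's actual taxonomy.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical encoding of a modular AI harm framework.
# All names here are illustrative assumptions, not the report's taxonomy.

@dataclass
class Entity:
    """Who or what was harmed (e.g., an individual, group, or institution)."""
    name: str
    kind: str  # e.g., "individual", "organization", "society"

@dataclass
class HarmEvent:
    """A single identified instance of AI harm, linking its key elements."""
    ai_system: str                  # the deployed AI system involved
    entity: Entity                  # the harmed party
    harm_type: str                  # e.g., "physical", "economic", "reputational"
    tangible: bool                  # tangible vs. intangible harm
    description: str = ""
    severity: Optional[str] = None  # left open so adopters can customize

# Example usage: classifying a hypothetical incident.
event = HarmEvent(
    ai_system="automated resume-screening tool",
    entity=Entity(name="job applicants", kind="group"),
    harm_type="economic",
    tangible=True,
    description="Qualified candidates incorrectly filtered out.",
)
print(event.harm_type, "harm to", event.entity.name)
```

Keeping the severity and harm-type fields open-ended mirrors the framework's stated goal of not imposing a single interpretation of AI harm: each adopter can slot in their own definitions while retaining the same relational structure.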
Collaborations between researchers and policymakers are necessary for progress but can be challenging in practice. This blog post reports on recent discussions among privacy experts about the obstacles they face when engaging in the policy space and offers advice on how to overcome those barriers.
On July 21, the White House announced voluntary commitments from seven AI firms to ensure safe, secure, and transparent AI. CSET’s research provides important context to this discussion.
With the rapid integration of AI into our daily lives, we must all learn when and whether to trust the technology, understand its capabilities and limitations, and adapt as these systems — and our functional relationships with them — evolve.
CSET's Andrew Lohn provided his expert analysis in a BBC article discussing the urgent need to integrate cybersecurity measures into artificial intelligence systems.