In a weekly digest published by Foreign Policy, CSET's Emily S. Weinstein offered her expert analysis on a recent study conducted by the Australian Strategic Policy Institute.
Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI), but the sheer number of frameworks, along with their loosely specified audiences, can make it difficult for organizations to select ones that meet their needs. This report presents a matrix that organizes approximately 40 public process frameworks according to their areas of focus and the teams that can use them. Ultimately, the matrix helps organizations select the right resources for implementing responsible AI.
A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published by Forbes.
In an op-ed published in TIME, CSET's Helen Toner discusses the challenges of understanding and interacting with chatbots powered by large language models, a form of artificial intelligence.
The Messenger published an article featuring insights from CSET's Mina Narayanan. The article delves into the growing concerns surrounding the regulation of artificial intelligence and the challenges Congress faces in developing rules for its use.
CSET's Heather Frase was interviewed by Politico for a newsletter segment on the U.S. government's plan to conduct a public experiment at the DEFCON 31 hacking convention in August.
CSET's Heather Frase was quoted by the Associated Press in an article discussing the Biden administration's efforts to ensure the responsible development of AI.
The New York Times' Morning Newsletter cited CSET's Helen Toner in a piece about the rise of artificial intelligence and how it could become a regular part of our everyday lives.
CSET's Heather Frase was interviewed by the Financial Times for an article about OpenAI's red team and its mission to test and mitigate the risks of GPT-4.