A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published by The New York Times about the potential dangers of AI-powered chatbots.
A report by CSET's Emily S. Weinstein and Ngor Luong was cited in an article published by Roll Call. The report identifies the main U.S. investors active in the Chinese artificial intelligence market and the set of AI companies in China that have benefitted from U.S. capital.
A report by CSET's Emily S. Weinstein and Ngor Luong was cited in an article published by Foreign Policy. The report identifies the primary American investors in Chinese artificial intelligence companies.
Reuters cited a report by CSET's Emily S. Weinstein and Ngor Luong. The report identifies the primary American investors in the Chinese artificial intelligence market and highlights the AI companies in China that have received U.S. investment.
Forbes referred to a report authored by CSET alumni Diana Gehlhaus and Santiago Mutis. The report delves into the domestic AI workforce, providing an initial evaluation of its makeup, size, and essential features.
A report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published by Grid. The report examines the potential misuse of language models for influence operations in the future and proposes a structure for evaluating possible solutions to this problem.
A report by CSET’s Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in collaboration with OpenAI and Stanford Internet Observatory was cited in an article published on Medium. The report explores how language models could be misused for influence operations in the future, and it provides a framework for assessing potential mitigation strategies.
In an interview with CyberScoop, Research Fellow Josh A. Goldstein discussed his research, conducted in collaboration with OpenAI and the Stanford Internet Observatory, on the use of large language models to deploy propaganda.