In their op-ed featured in MIT Technology Review, Josh A. Goldstein and Renée DiResta provide their expert analysis on OpenAI's first report on the misuse of its generative AI.
In a KCBS Radio segment that explores the rapid rise of AI and its potential impact on the 2024 election, CSET's Josh Goldstein provides his expert insights.
WIRED highlighted CSET Research Analyst Micah Musser in an article that references a report published by CSET in collaboration with OpenAI and the Stanford Internet Observatory. The report examines the potential misuse of language models in future influence operations and offers a framework for evaluating possible countermeasures.
Josh A. Goldstein, Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, and Katerina Sedova
| January 2023
Machine learning advances have powered the development of new and more powerful generative language models. These systems are increasingly able to write text at near human levels. In a new report, authors at CSET, OpenAI, and the Stanford Internet Observatory explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.
In an interview with CyberScoop, Research Fellow Josh A. Goldstein discussed his research, conducted in collaboration with OpenAI and the Stanford Internet Observatory, on the use of large language models to deploy propaganda.
In a piece examining Google's work on various AI projects, Axios highlights the potential for AI to turbocharge disinformation campaigns and cites CSET's work examining this possibility.
CSET Senior Fellow Andrew Lohn testified before the House of Representatives Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Innovation at a hearing on "Securing the Future: Harnessing the Potential of Emerging Technologies While Mitigating Security Risks." In his testimony, Lohn discussed the application of AI systems in cybersecurity and AI's vulnerabilities.