An article on Interesting Engineering referenced a report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova in partnership with OpenAI and Stanford Internet Observatory. The report examines how language models could be misused in future influence operations and offers a framework for evaluating potential countermeasures.
In a report for the Observer Research Foundation, Research Analyst Husanjot Chahal writes about the ethics of artificial intelligence and how the multitude of efforts across such a diverse group of stakeholders reflects the need for guidance in AI development.
Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova | May 2021
Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines GPT-3, a cutting-edge AI system that writes text, to assess its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and, potentially, its effectiveness.