In a recent article and broadcast story by NPR on the potential dangers of generative AI, CSET’s Josh Goldstein provided expert insight on the topic. NPR also referenced a report he co-authored.
Goldstein states, “Language models are a natural tool for propagandists.” He explained that with a language model, propagandists can produce large volumes of original text quickly and at little cost, putting propaganda campaigns within reach of a wider range of bad actors. Goldstein also noted that AI-generated content can be highly convincing: “You can generate persuasive propaganda, even if you’re not entirely fluent in English, or even if you don’t know the idioms of your target community.”
For the full story, visit NPR.