CyberAI

In a new preprint paper, CSET's Josh A. Goldstein and the Stanford Internet Observatory's Renee DiResta explored the use of AI-generated imagery to drive Facebook engagement.

This blog post assesses how different priorities can change the risk-benefit calculus of open foundation models and offers divergent answers to the question: “Given current AI capabilities, what might happen if the U.S. government left the open AI ecosystem unregulated?” By answering this question from different perspectives, the post highlights the dangers of hastily committing to any particular course of action without weighing the potentially beneficial, risky, and ambiguous implications of open models.

How Persuasive Is AI-Generated Propaganda?

Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, and Michael Tomz
| February 2024

Research participants who read propaganda generated by GPT-3 davinci (a large language model) were nearly as persuaded as those who read real propaganda from Iran or Russia, according to a new peer-reviewed study by Josh A. Goldstein and co-authors.

Introducing the Cyber Jobs Dataset

Maggie Wu
| February 6, 2024

This data snapshot is the first in a series on CSET’s cybersecurity jobs data, a new dataset created by classifying data from 513 million LinkedIn user profiles. Here, we offer an overview of its creation and explore some use cases for analysis.

Deepfakes, Elections, and Shrinking the Liar’s Dividend

Brennan Center for Justice
| January 23, 2024

In an article published by the Brennan Center for Justice, Josh A. Goldstein and Andrew Lohn delve into concerns about the spread of misleading deepfakes and the liar's dividend.

The Core of Federal Cyber Talent

Ali Crawford
| January 2024

Strengthening the federal cyber workforce is one of the main priorities of the National Cyber Workforce and Education Strategy. The National Science Foundation’s CyberCorps Scholarship-for-Service program is a direct cyber talent pipeline into the federal workforce. As demand for federal cyber talent grows, some form of program expansion appears necessary. This policy brief summarizes trends across participating institutions to inform how the program might expand and illustrates what a potential future artificial intelligence (AI) federal scholarship-for-service program could look like.

Join CSET researchers for a 90-day review of ongoing implementation of the Biden Administration's Executive Order on artificial intelligence.

Scaling AI

Andrew Lohn
| December 2023

While recent progress in artificial intelligence (AI) has relied primarily on increasing the size of models and the computing budgets used to train them, we ask whether those trends will continue. Financial incentives weigh against continued scaling, and further investment can yield diminishing returns. These effects may already be slowing growth among the very largest models. Future progress in AI may rely more on ideas for shrinking models and on inventive use of existing models than on simply increasing investment in compute resources.
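
To make the idea of diminishing returns concrete, here is a minimal sketch that assumes a hypothetical power-law relationship between training compute and loss; the functional form and the constants a and b are illustrative assumptions for this sketch, not figures from the report.

# Illustrative only: assume loss follows a power law in training compute,
# L(C) = a * C**(-b). The constants a and b are arbitrary assumptions
# chosen to show the shape of diminishing returns, not values from the report.

def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    """Loss under an assumed power-law scaling relation."""
    return a * compute ** (-b)

# Each doubling of compute buys a smaller absolute improvement in loss,
# which is the sense in which returns to further investment diminish.
compute = 1e21
for _ in range(5):
    gain = loss(compute) - loss(2 * compute)
    print(f"compute={compute:.1e}  loss={loss(compute):.3f}  gain from doubling={gain:.4f}")
    compute *= 2

Running the loop prints a shrinking "gain from doubling" at each step, illustrating why, under such a relation, simply spending more on compute eventually buys less and less improvement.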

CSET’s Must Read Research: A Primer

Tessa Baker
| December 18, 2023

This guide provides a rundown of CSET’s research since 2019 for first-time visitors and longtime fans alike. Quickly get up to speed on our “must-read” research and learn how we organize our work.

In a WIRED article on Microsoft's AI chatbot serving up misinformation, conspiracy theories, and outdated information in response to political queries, CSET's Josh A. Goldstein provided expert insights.