Over the past year, artificial intelligence has quickly become a focal point in K-12 education. This blog post describes new and existing K-12 AI education efforts so that U.S. policymakers and other decision-makers may better understand what’s happening in practice.
How might AI impact the democratic process and how should policymakers respond? What steps can the media, AI providers, and social media companies take to help people find reliable information and recognize when content is AI-generated?
On April 10, CSET Research Fellow Josh Goldstein and a panel of outside experts discussed these and other challenges.
In an article published by the Financial Times exploring the rapid rise of AI-generated conspiracy theories and spam content on social media platforms, CSET's Josh A. Goldstein provided expert insights.
In a new preprint paper, CSET's Josh A. Goldstein and the Stanford Internet Observatory's Renee DiResta explored the use of AI-generated imagery to drive Facebook engagement.
This blog post assesses how different priorities can change the risk-benefit calculus of open foundation models, and offers divergent answers to the question: given current AI capabilities, what might happen if the U.S. government left the open AI ecosystem unregulated? By answering this question from multiple perspectives, the post highlights the danger of hastily committing to any particular course of action without weighing the potentially beneficial, risky, and ambiguous implications of open models.
Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, and Michael Tomz
| February 2024
Research participants who read propaganda generated by GPT-3 davinci (a large language model) were nearly as persuaded as those who read real propaganda from Iran or Russia, according to a new peer-reviewed study by Josh A. Goldstein and co-authors.
This data snapshot is the first in a series on CSET’s cybersecurity jobs data, a new dataset created by classifying data from 513 million LinkedIn user profiles. Here, we offer an overview of its creation and explore some use cases for analysis.
In an article published by the Brennan Center for Justice, Josh A. Goldstein and Andrew Lohn delve into the concerns about the spread of misleading deepfakes and the liar's dividend.
Strengthening the federal cyber workforce is one of the main priorities of the National Cyber Workforce and Education Strategy. The National Science Foundation's CyberCorps Scholarship-for-Service program is a direct cyber talent pipeline into the federal workforce. As the program works to meet growing demand for cyber talent, some form of program expansion is needed. This policy brief summarizes trends from participating institutions to understand how the program might expand, and illustrates what a future artificial intelligence (AI) federal scholarship-for-service program could look like.