Despite recent upheaval in the AI policy landscape, AI evaluations—including AI red-teaming—will remain fundamental to understanding and governing the usage of AI systems and their impact on society. This blog post draws from a December 2024 CSET workshop on AI testing to outline challenges associated with improving red-teaming and suggest recommendations on how to address those challenges.
As new advanced AI systems roll out, there is widespread disagreement about malicious use risks. Are bad actors likely to misuse these tools for harm? This report presents a simple framework to guide the questions researchers ask—and the tools they use—to evaluate the likelihood of malicious use.
In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and maximizing benefits while mitigating risks. Our response highlights policies to strengthen the AI workforce, secure technology from illicit transfers, and foster an open and competitive AI ecosystem.
In his Tech Policy Press op-ed, Josh A. Goldstein discusses Meta's quarterly threat report, which highlights the discovery of five networks of fake accounts attempting to manipulate public debate: one each from Moldova, Iran, and Lebanon, and two from India.
Jessica Ji, Jenny Jun, Maggie Wu, and Rebecca Gelles
| November 2024
Artificial intelligence models have become increasingly adept at generating computer code. They are powerful and promising tools for software development across many industries, but they can also pose direct and indirect cybersecurity risks. This report identifies three broad categories of risk associated with AI code generation models and discusses their policy and cybersecurity implications.
Mia Hoffmann provided her expert insights in an article published by TIME. The article discusses concerns about artificial intelligence (AI) affecting the 2024 U.S. elections through misinformation and deepfakes.
This blog describes key takeaways from the NATO-Ukraine Defense Innovators Forum, held in Krakow, Poland, in June 2024. It provides an overview of changing concepts of operation, battlefield realities, and technological aspirations and innovations in Ukraine, with a focus on uncrewed aerial vehicles (UAVs) and counter-UAV systems. It builds upon CSET's previous blog from the Future of Drones in Ukraine conference held in Warsaw in November 2023.
In their op-ed in Foreign Policy, Josh A. Goldstein and Renée DiResta discuss recent efforts by the U.S. government to disrupt Russian influence operations, highlighting how Russia uses fake domains, media outlets, and social media influencers to manipulate global public conversations.
Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee
| October 2024
Computer scientists have long sought to build systems that can actively and autonomously carry out complicated goals in the real world—commonly referred to as artificial intelligence "agents." Recently, significant progress in large language models has fueled new optimism about the prospect of building sophisticated AI agents. This CSET-led workshop report synthesizes findings from a May 2024 workshop on this topic, including what constitutes an AI agent, how the technology is improving, what risks agents exacerbate, and intervention points that could help.
Kyle Crichton, Jessica Ji, Kyle Miller, John Bansemer, Zachary Arnold, David Batz, Minwoo Choi, Marisa Decillis, Patricia Eke, Daniel M. Gerstein, Alex Leblang, Monty McGee, Greg Rattray, Luke Richards, and Alana Scott
| October 2024
As critical infrastructure operators and providers seek to harness the benefits of new artificial intelligence capabilities, they must also manage associated risks from both AI-enabled cyber threats and potential vulnerabilities in deployed AI systems. In June 2024, CSET led a workshop to assess these issues. This report synthesizes our findings, drawing on lessons from cybersecurity and insights from critical infrastructure sectors to identify challenges and potential risk mitigations associated with AI adoption.