Today’s announcement by the Biden Administration of voluntary commitments from OpenAI, Anthropic, Google, Microsoft, and other AI labs to manage the promises and risks of AI is a step in the right direction toward improved AI safety, security, and transparency. Over the years, CSET has developed deep expertise in these areas, and some of its recommendations are reflected in today’s commitments.
Related Work
Ensuring Products are Safe Before Introducing Them to the Public
- Helen Toner: Key Concepts in AI Safety (Primer) and AI Accidents: An Emerging Threat: What Could Happen and What to Do
- Dr. Heather Frase: One Size Does Not Fit All (CSET’s AI Safety Research Agenda)
Building Systems that Put Security First
- John Bansemer: Securing AI Makes for Safer AI (Blog)
- Dr. Drew Lohn: Securing AI: How Traditional Vulnerability Disclosure Must Adapt
Earning the Public’s Trust
- Emmy Probasco: The Inigo Montoya Problem for Trustworthy AI and Who Cares About Trust?
- Dr. Rita Konaev: Trusted Partners: Human-Machine Teaming and the Future of Military AI
- Dr. Dewey Murdick: Building Trust in AI: A New Era of Human-Machine Teaming
- Dr. Josh Goldstein: Finding Language Models in Influence Operations
Talk to Our Experts
If you are a member of the media and would like to discuss today’s announcement with one of the experts listed above, please contact tgb5@georgetown.edu.