Heather Frase, PhD, was a Senior Fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where she worked on AI assessment. She also serves as an unpaid advisor to Meta’s Open Loop project, providing expertise on implementation of the National Institute of Standards and Technology’s AI Risk Management Framework. Prior to joining CSET, Heather spent eight years providing data analytics, computational modeling, machine learning (ML), and artificial intelligence (AI) support for intelligence, defense, and federal contracts. Additionally, she spent 14 years at the Institute for Defense Analyses (IDA), supporting the Director, Operational Test and Evaluation (DOT&E). At IDA she led analytic research teams that applied scientific, technological, and statistical expertise to develop data metrics and collection plans for operational tests of major defense systems, analyze test data, and produce assessments of operational effectiveness and suitability. She holds a PhD in Materials Science from the California Institute of Technology and a BS in Physics from Miami University in Oxford, Ohio.

Related Content

Artificial intelligence incidents have accompanied the rapid advancement of AI capabilities over the past decade. However, there is not yet a concerted policy effort in the United States to monitor, document, and aggregate…

On February 2, 2024, CSET's Assessment and CyberAI teams submitted a response to NIST's Request for Information related to the Executive Order Concerning Artificial Intelligence (88 FR 88368). In the submission, CSET compiles recommendations from…

In their recent article published by the Brennan Center for Justice, CSET's Heather Frase and Mia Hoffman, along with Edgardo Cortés and Lawrence Norden from the Brennan Center, delve into the growing role of artificial…

Standards enable good governance practices by establishing consistent measurement and norms for interoperability, but creating standards for AI is a challenging task. The Center for Security and Emerging Technology and the Center for a New…

This explainer defines criteria for effective AI incident collection and identifies tradeoffs among potential reporting models: mandatory, voluntary, and citizen reporting.

As policymakers decide how best to regulate AI, they first need to grasp the different types of harm that various AI applications might cause at the individual, national, and even societal levels. To better understand…

Analysis

Adding Structure to AI Harm

July 2023

Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are…

CSET's AI Assessment team provides a template that helps organizations create profiles to guide the management and deployment of AI systems in line with NIST's AI Risk Management Framework.

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems…

Analysis

One Size Does Not Fit All

February 2023

Artificial intelligence is so diverse that no single, one-size-fits-all assessment approach can adequately be applied to it. AI systems vary widely in their functionality, capabilities, and outputs. They are also created…