This is the third post in an ongoing series that we will publish as we dig deeper into the White House’s AI EO.
The recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence unsurprisingly contains numerous new initiatives related to AI, but what might be surprising is the inclusion of DNA synthesis and other biosecurity-related provisions.
The inclusion of biosecurity provisions in an EO ostensibly about AI shows that the White House considers biorisk a major concern for AI safety and security. In particular, models that could facilitate the development, acquisition, and use of biological weapons are called out in sections outlining AI capability assessments (4.1), reporting requirements (4.2), security reviews of federal data (4.7), and consumer protections (8.b). Notably, the compute threshold for mandated reporting is lower for models trained primarily on biological sequence data, meaning that these systems will face more oversight than general-purpose systems (4.2.b.i).
Section 4.4, which is entirely dedicated to limiting biological risk, is the most consequential for biotechnology and will have major implications for biological research and development. It requires the USG to undertake risk assessments evaluating the potential for AI to increase biological risk, particularly its potential misuse to develop biological weapons. It also establishes a new DNA synthesis screening requirement for federally funded biological research.
Breaking Down the Bio-Provisions of Section 4.4
Below, we identify the major bio-relevant takeaways of the executive order section 4.4, provide some useful context, and identify questions that we have about its implementation. For further detail on the executive order, including information on the timelines for these actions and the agencies tasked with the responsibility, make sure to check out the CSET EO Task Tracker and our broader discussion of Section 4 of the EO.
Biorisk Impact Assessments
Researchers are increasingly developing AI models for biology that can accelerate research and development timelines. The two general categories of AI tools for biology, chatbots and biological design tools (BDTs), each present unique capabilities and potential risks. Malicious actors might use chatbots to gather information and develop a plan to cause harm, although this risk should be evaluated in the context of the existing risk landscape. BDTs, such as AlphaFold, predict, simulate, and help engineer biological molecules and processes, and they can help researchers understand large-scale biological patterns. These models could be exploited to design new pathogens or toxins, or to evade screening and detection measures, for example by slightly modifying a nucleotide sequence so that it still poses a risk but no longer matches a prohibited sequence.
Biological AI tools are powered by large datasets of biological information. The U.S. government owns, partially funds, or facilitates contributions to many of these biological databases, including some of the largest repositories of DNA sequences, protein structures, and chemical properties.
Executive Order Section 4.4a
The executive order commissions two efforts to better understand how AI can both exacerbate and mitigate biorisks and to provide recommendations accordingly:
- Report to the President: The Department of Homeland Security (DHS) is tasked with assessing the potential for AI to enhance Chemical, Biological, Radiological, and Nuclear (CBRN) threats through consultation with experts (4.4.a.i). In particular, the report is meant to:
- Identify which types of AI models present the biggest risks, and
- Include recommendations for regulation, oversight, and potential safety evaluation requirements.
- National Academies’ Study: The Department of Defense (DoD) is asked to contract the National Academies of Sciences, Engineering, and Medicine (NASEM) to conduct a study to evaluate AI’s impact on biorisk and provide recommendations (4.4.a.ii). The study should examine the risks from generative AI models trained on biological data, how these models can be used to reduce biorisk, the national security implications of AI trained on U.S. government-owned datasets, and any other aspects of AI applied to synthetic biology that the Secretary of Defense deems worthy of additional scrutiny.
Remaining Questions
- Regulations already exist to limit research that could make pathogens or toxins more dangerous. However, these frameworks do not adequately define what characteristics constitute a concerning level of risk. What framework will experts use to evaluate the level of biorisk with and without AI tools?
- The executive order specifies that AI experts and CBRN experts should be consulted to inform the report to the President on AI biorisks. Will non-CBRN biology experts (i.e., life scientists) also be involved?
- Will the potential benefits to biological and medical research of access to large, government-owned biological datasets be considered in addition to national security concerns?
DNA Synthesis Screening
Nucleic acid synthesis screening regulates the flow of potentially risky, lab-created custom nucleic acids (like DNA or RNA) that could be used to make pathogens or toxins. Researchers frequently use custom strands of DNA for a range of research applications and can obtain them by ordering from commercial providers, who synthesize the DNA and ship it to the customer. Although synthesized DNA is important for basic research, certain sequences can cause harm if misused. Malicious actors could order DNA that codes for a toxin or for a gene that makes a pathogen more dangerous, and some viruses can be completely reconstructed from their DNA or RNA.
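As a purely illustrative sketch of the core idea, a list-based screening check at its simplest compares each incoming order against known sequences of concern. The sequences below are made-up placeholders, and real protocols, such as the HHS framework and IGSC member practices, also involve customer vetting, expert review, and far more sophisticated sequence comparison.

```python
# Purely illustrative sketch of a list-based screening check.
# The "sequences of concern" below are made-up placeholders, not real sequences.
SEQUENCES_OF_CONCERN = {
    "ATGCGTACGTTAGCCTAGGCATCG",
    "GGCTTAACCGGATCCGTACGATCC",
}

def screen_order(ordered_sequence: str) -> bool:
    """Flag an order if any listed sequence appears verbatim within it."""
    ordered_sequence = ordered_sequence.upper()
    return any(seq in ordered_sequence for seq in SEQUENCES_OF_CONCERN)

# An order that contains a listed sequence verbatim is flagged for review.
print(screen_order("TTATGCGTACGTTAGCCTAGGCATCGAA"))  # True
print(screen_order("GGGGGGGGGGGGGGGGGGGGGGGGGGGG"))  # False
```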
Until now, DNA synthesis screening has been voluntary for commercial providers, although biosecurity experts, including the National Science Advisory Board for Biosecurity (NSABB), have recommended mandatory sequence and customer screening for nearly two decades. The U.S. Department of Health and Human Services (HHS) developed a recommended screening framework for providers in 2010 and updated it in 2023, but these guidelines were not binding or enforceable. Most, but not all, commercial providers have committed to screening through membership in the industry-led International Gene Synthesis Consortium (IGSC).
Executive Order Section 4.4b
The executive order requires the development of a framework to screen nucleic acid synthesis and makes procurement from companies that screen a condition of federal research funding.
- Framework Specifications: The Director of OSTP will lead efforts to develop a screening framework, which should include screening criteria and methods, standards, and a reporting mechanism for concerning orders, using existing guidance like the HHS screening framework (4.4.b.i). Agencies should seek input from industry and relevant stakeholders regarding implementation and best practices, especially as it concerns screening specifications, database management, technical implementation, and conformity assessments (4.4.b.ii).
- Implementation and Evaluation: All research projects that receive federal research funding will be required to procure synthetic nucleic acids from providers that adhere to the screening framework (4.4.b.iii). The Secretary of Homeland Security will consult with relevant agencies to develop a structured testing and evaluation framework and will submit an annual report detailing results and recommendations to strengthen screening (4.4.b.iv).
Remaining Questions
- Screening Criteria:
- What DNA sequences will be included in the framework, and how will they be chosen? List-based approaches compare DNA orders against a list of “sequences of concern,” but such lists are likely to be incomplete and can be evaded by capable actors (see the sketch after this list).
- Will there be a mechanism to update the framework as technology evolves and as the stress testing required in 4.4.b.iv identifies vulnerabilities?
- Will benchtop DNA synthesizers, another way to generate custom DNA, be covered by the screening framework, as recommended by the 2023 HHS guidance?
- Industry Role:
- Framework establishment (4.4.b.i) and industry engagement (4.4.b.ii) are slated to occur concurrently within 180 days. How will industry input be integrated into framework development?
- Will industry feedback be incorporated at any stage after framework development, for example during implementation or stress testing?
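To illustrate the evasion concern raised above, the following sketch (again using made-up placeholder sequences) contrasts a naive exact-match lookup with a crude similarity score. Real screening uses far more sophisticated sequence comparison, such as the “best match” style approaches discussed in HHS guidance, but the basic intuition is the same.

```python
# Purely illustrative: a single changed base defeats an exact-match lookup,
# while a simple similarity score still surfaces the near match for review.
# Both sequences are made-up placeholders, not real sequences of concern.
from difflib import SequenceMatcher

sequence_of_concern = "ATGCGTACGTTAGCCTAGGCATCG"
ordered_sequence = "ATGCGTACGTTAGCCTAGGCATGG"  # identical except for one base

print(sequence_of_concern in ordered_sequence)  # False: exact matching misses it
similarity = SequenceMatcher(None, sequence_of_concern, ordered_sequence).ratio()
print(round(similarity, 2))  # 0.96: a threshold-based check would flag this for review
```

A real screening system would pair this kind of sequence comparison with customer vetting and expert follow-up, as the framework specifications above envision.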
Concluding Thoughts
We are glad to see the EO address biosecurity and outline a foundation for assessing and mitigating biorisks. We are particularly excited to see a requirement for nucleic acid synthesis screening, an important safeguard that experts have been recommending for nearly two decades. Because the screening mandate uses federal research funding as its policy lever, it will capture most biological research activity, but it will not reach lone malicious actors who do not receive federal funding or who do not rely on it for their malicious activity. If that scenario is a concern, complete risk mitigation will require further legislative action to mandate universal screening.
The impacts of AI systems on biorisk are still unclear, and the actions put into motion by this executive order are a step in the right direction. We look forward to the positive impact that the executive order will have by addressing both foundational and AI-enhanced biosecurity risks.