Frontier AI capabilities are advancing rapidly, posing increasingly pressing national security risks, and showing no sign of slowing down to let governance catch up. While comprehensive regulation or legislation may be years away, targeted government actions can improve AI preparedness in the near term. This blog post outlines relatively light-touch, low-cost measures in key areas that could offer practical benefits with minimal downside risk.
Given the need for near-term preparedness measures, a pragmatic policy approach is to build on existing efforts by the private sector. So far, frontier AI governance in the private sector primarily takes the form of voluntary preparedness frameworks, such as Anthropic’s Responsible Scaling Policy and Google DeepMind’s Frontier Safety Framework. These frameworks, which draw from research by civil society, describe how companies plan to manage risks from emerging AI capabilities. They are scoped to risks of severe harm; as a representative example, OpenAI’s Preparedness Framework focuses on risks from cybersecurity, biological, chemical, and AI self-improvement capabilities. In Seoul in 2024, leading AI companies signed onto the Frontier AI Safety Commitments: they promised to manage risks effectively, hold themselves accountable for safely developing and deploying their systems, and be transparent to external actors, including governments.
These are challenging goals, but companies need not take on these challenges alone. The U.S. government can and should assist with national-security-relevant AI preparedness work that it is especially well suited for. The government should take action in three key areas:
- Sharing national security expertise and intelligence with companies;
- Promoting transparency into frontier AI development; and
- Facilitating the development of best practices for risk management.
Contributing national security expertise
The U.S. government has access to intelligence and long-standing cybersecurity expertise that could benefit AI companies’ efforts to secure their systems and manage risks. Sharing relevant knowledge would support both AI companies and national security.
The most advanced AI models in the world, developed at great expense by American companies, are currently vulnerable to theft by the Chinese Communist Party (CCP). With stolen model weights, the CCP could rapidly adapt these models to its own ends. American AI infrastructure also faces threats from other state actors, as well as criminal and terrorist groups. Considering the high stakes, it makes sense that AI companies are calling for President Trump to confront “risks of criminal, terrorist, and state-sponsored misuse and industrial espionage” and “streamline industry engagement with national security departments and agencies.” Even developing the option to secure some model weights against cyber threats from state-level actors could take years, because protecting models that are in large-scale active use by untrusted users is a difficult technical challenge. A coordinated multi-agency effort may be appropriate to bring the highest levels of security within reach for companies; the intelligence community (especially the National Security Agency) and the Center for AI Standards and Innovation (CAISI) should take the initiative to explore options.
While securing model weights presents a complex technical challenge, U.S. national security agencies can already partner with AI companies to share cyber threat intelligence and cybersecurity best practices. Bidirectional information sharing between private companies and executive agencies is a well-established best practice. Some government-held intelligence may be too sensitive to share when it could reveal classified sources or methods. However, agencies can still share best practices drawn from their extensive experience operating facilities at varying security levels. AI companies also hold valuable threat intelligence that could inform agencies; for example, Microsoft partners with OpenAI to monitor attack activity and improve defenses. There are multiple opportunities to expand existing efforts to support cyber information sharing. The Joint Cyber Defense Collaborative, announced by the Cybersecurity and Infrastructure Security Agency in 2021, has already conducted AI cyber tabletop exercises and could expand its role in cyber intelligence sharing for frontier AI. The Frontier Model Forum, an association of six leading AI companies, could reduce overhead for its member companies by facilitating information sharing among them and serving as a point of contact for national security agencies. More ambitiously, the Bipartisan Senate AI Working Group led by Sen. Schumer noted last year that it may be appropriate to establish an AI-focused Information Sharing and Analysis Center (ISAC) as a more comprehensive interface between AI companies and national security agencies.
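To make “information sharing” a bit more concrete: much of the existing federal machinery for cyber threat intelligence, such as CISA’s Automated Indicator Sharing program, exchanges machine-readable indicators in the STIX/TAXII formats, and an AI-focused channel could plausibly reuse the same plumbing. The sketch below (in Python, with entirely made-up values) shows roughly what a single shared indicator might look like, following the STIX 2.1 indicator object; it is illustrative only and does not describe any company’s or agency’s actual practice.

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(name: str, description: str, pattern: str) -> dict:
    """Build a minimal STIX 2.1-style indicator object (hypothetical values only)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "description": description,
        "indicator_types": ["malicious-activity"],
        "pattern": pattern,          # expressed in the STIX patterning language
        "pattern_type": "stix",
        "valid_from": now,
    }

# Hypothetical indicator an AI company might pass to a federal partner (or vice versa).
indicator = make_indicator(
    name="Suspected reconnaissance against model-weight storage",
    description="Illustrative placeholder, not real threat intelligence.",
    pattern="[ipv4-addr:value = '203.0.113.42']",
)
print(json.dumps(indicator, indent=2))
```

In practice, what matters is less the exact format than having an agreed channel and schema so that indicators from companies and agencies can be correlated quickly in both directions.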
Government-held intelligence can also be useful for risk assessment of frontier AI. Building on a 2024 Memorandum of Understanding between the Department of Commerce and the Department of Energy, CAISI and national laboratories should continue their work on evaluations that use classified datasets to determine frontier models’ chemical and biological risks. As an example of a working partnership, the Department of Energy’s National Nuclear Security Administration evaluates Anthropic’s models for potential nuclear and radiological risks.
Given its expertise in emergency preparedness, the Department of Homeland Security (DHS) is naturally positioned to lead preparations for potential emergencies caused by malicious use or loss of control over AI systems. Emergencies that begin with AI may cascade across multiple sectors, so DHS’s established coordination role with the critical infrastructure Sector Risk Management Agencies (SRMAs) could be especially valuable. To effectively fulfill this responsibility, both DHS and partner agencies need to enhance their AI expertise.
Promoting transparency into frontier AI
Unlike earlier emerging technologies, which the U.S. government helped fund and develop directly, frontier AI is being developed in a way that is relatively opaque to the government. Critical information about frontier AI capabilities is currently siloed in private companies. This needs to change: transparency into frontier AI development is a prerequisite for the government and civil society to respond appropriately as capabilities advance.
Companies recognize this at some level: the Frontier AI Safety Commitments include high-level commitments to transparency regarding companies’ risk management practices. However, the abstract ideal of transparency needs to be operationalized so that the right information makes its way to the right policymakers. Policymakers need to understand how companies intend their models to behave, how well the models are aligned with these goals, what the capabilities of the models are, and how companies are preparing for risks to national security. Federal legislation could advance transparency by requiring public disclosure of critical information, empowering a government body to handle sensitive information, and providing legal protection for whistleblowers and third-party researchers.
Much critical information can and should be shared publicly. If companies do not disclose the following information voluntarily, legislative action may be required.

First, every frontier AI company should release the specification used in training to define ideal behavior for its systems. As OpenAI defines it, a specification is a detailed document that describes a company’s “approach to shaping model behavior and how [the company] evaluate[s] tradeoffs when conflicts arise”. Disclosing the specification enables external experts to scrutinize the definition of ideal model behavior and assess whether particular behaviors are intended. It also empowers citizens to understand the principles underlying increasingly influential technologies. OpenAI and Anthropic have already published their specifications (OpenAI calls theirs the Model Spec, and Anthropic calls theirs the constitution), which demonstrates that disclosure is workable from a business perspective.

Second, the public needs to know enough to evaluate how companies are managing risks from their systems. Preparedness frameworks are an excellent foundation; the next step could be to give the public enough information to evaluate whether companies are adhering to them. This would require companies to disclose their approach to risk assessment, risk mitigation, and risk governance, with enough supporting evidence (from model evaluations, red teaming, forecasting exercises, etc.) for external experts to form independent opinions.

Third, the public should know who has access to frontier capabilities, especially capabilities like hacking and military strategy that could assist individuals in consolidating power.
Some information about frontier AI, such as details of frontier capabilities and incidents of harm caused by AI systems, may require more sensitive handling. Companies may resist disclosing advanced model capabilities for competitive reasons and may hesitate to report incidents of harm due to reputational concerns. From a national security perspective, the U.S. government may wish to maintain strategic ambiguity regarding frontier capabilities, at least temporarily, to give policymakers time to develop appropriate responses. A government body such as CAISI or the Bureau of Industry and Security (BIS) could have a role to play in receiving and distributing sensitive information, as appropriate. For example, this coordinating body could pass along reports of increased cyber offense capabilities from an evaluation organization to an agency like DHS that is equipped to receive and protect sensitive information, mitigate risks from frontier capabilities, and potentially leverage those capabilities for defense. However, there are strong reasons for public disclosure even of relatively sensitive information: transparency about frontier capabilities and real-world impacts would enable civil society to respond in the national interest, and excessive government secrecy would risk international mistrust and misunderstanding.
Whistleblower protections are a crucial component of transparency, helping to hold companies accountable to their voluntary commitments. In an open letter called “A Right to Warn about Advanced Artificial Intelligence,” current and former employees of frontier AI companies argued that “ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.” Congress or the Trump administration should establish a secure line for whistleblowers to report to a government body (such as CAISI or BIS) when companies’ stated commitments do not line up with their actions, or more broadly when companies act in ways that threaten national security. Whistleblower protections should shield AI company employees from legal retaliation. As in other sectors, monetary incentives may be appropriate to counteract the likely effects on a whistleblower’s career and reputation.
Finally, the government can promote transparency by broadening access to frontier AI systems for third-party researchers. Several third-party evaluation organizations, such as CAISI, METR, and Apollo Research, already have relationships with AI companies. These organizations conduct crucial evaluations of properties relevant to national security, including dual-use capabilities and propensities for deception. The government could foster a more holistic understanding of frontier AI development by empowering more researchers to conduct a greater variety of evaluations. Thanks to recent advances in privacy-enhancing technologies, external scrutiny of AI systems is possible without compromising privacy, security, or intellectual property. To encourage participation, external researchers would need a safe harbor protecting them from account suspension and legal reprisal when conducting good-faith evaluations.
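To illustrate one way broader third-party access could be structured, the sketch below (all names hypothetical, including `run_model` and `ResearcherAccess`) wraps a placeholder model behind an interface that authenticates a researcher, enforces a query budget, and writes an audit log, returning only model outputs rather than weights or internals. Real deployments would layer privacy-enhancing technologies, such as secure enclaves or query-level privacy guarantees, on top of this basic pattern.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

def run_model(prompt: str) -> str:
    """Stand-in for a frontier model call; returns a canned response."""
    return f"[model output for: {prompt[:40]}...]"

@dataclass
class ResearcherAccess:
    """Minimal structured-access wrapper: authenticated, rate-limited, audited queries.

    Researchers receive only model outputs, never weights or internal activations.
    """
    api_keys: dict                          # maps API key -> researcher identity
    max_queries_per_hour: int = 100
    audit_log: list = field(default_factory=list)
    _usage: dict = field(default_factory=dict)

    def query(self, api_key: str, prompt: str) -> str:
        researcher = self.api_keys.get(api_key)
        if researcher is None:
            raise PermissionError("unknown API key")

        hour = int(time.time() // 3600)     # current hour bucket for rate limiting
        used = self._usage.get((researcher, hour), 0)
        if used >= self.max_queries_per_hour:
            raise RuntimeError("hourly query budget exhausted")
        self._usage[(researcher, hour)] = used + 1

        output = run_model(prompt)
        # Log a hash of the prompt rather than its content, so usage patterns can be
        # audited without the provider retaining researchers' full queries.
        self.audit_log.append({
            "researcher": researcher,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "timestamp": time.time(),
        })
        return output

# Example: a vetted evaluation organization runs a (hypothetical) capability probe.
access = ResearcherAccess(api_keys={"key-123": "evals-org-a"})
print(access.query("key-123", "Summarize known jailbreak techniques for..."))
print(json.dumps(access.audit_log, indent=2))
```

A safe harbor would then function as a commitment not to revoke keys or pursue legal action against researchers whose logged queries fall within an agreed scope of good-faith evaluation.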
Developing best practices for risk management
A natural role for government in an industry with emerging risks is to facilitate the development of best practices for managing those risks. Best practices are immediately useful to AI companies: companies can adopt them to mitigate risks without having to devote as many of their own resources to risk research, and they provide clarity on what policymakers expect. Developing best practices also allows the government to shield U.S. companies from international regulatory burdens by pointing to evidence that these companies are already working toward safety. Best practices can help clarify “reasonable care” for liability purposes. Finally, the process of developing best practices reveals useful information for any future regulation; for example, it may turn out that setting intolerable risk thresholds is tractable and helpful, or that requiring a certain level of robustness against jailbreaking is infeasible.
CAISI is well positioned to coordinate stakeholders and build consensus on best practices, and it has already had success, even on a limited budget. Preliminary industry and expert consensus identifies five components for companies to include in their frontier AI preparedness frameworks: risk identification, intolerable risk thresholds, capability and risk evaluations, risk mitigation research, and risk governance. For each of these components, CAISI should bring together experts from across government, industry, academia, and civil society to develop best practices that companies can implement.
- CAISI can coordinate work on best practices for risk identification. Government expertise will be especially useful for identifying national-security-related risks.
- CAISI should provide an authoritative home for best practices on intolerable risk thresholds. The threshold paradigm faces a challenge: in a competitive environment, companies may feel pressured to release models that cross their risk thresholds because competitors (including in China) may release similar models anyway. Guidance from CAISI on navigating this dynamic would be helpful.
- CAISI should be funded and staffed so that it can continue to lead on capability and risk evaluations and share what it learns from working with stakeholders in the course of conducting them.
- CAISI should closely track technical research on risk mitigations, and ideally conduct its own research so that it can deeply understand companies’ decisions.
- Best practices for risk governance transfer well from other fields; CAISI should draw on stakeholders’ work to clarify expectations for AI companies.
Research funders within government could significantly contribute to the development of best practices. The National Science Foundation should continue funding foundational research into safe and reliable AI. Advancing the science of evaluations should be a core funding priority, because evaluations are a central component of frontier AI preparedness frameworks. Research is needed to improve the methodological rigor and accessibility of evaluation tools that can assess frontier AI capabilities and identify propensities for undesirable behaviors. Meanwhile, the Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA) can fund more applied national security research. Improving the feasibility of securing frontier models should be a top priority: for example, research into secure model APIs could yield well-documented open-source protocols that would bring higher security levels within reach. DARPA and IARPA should also explore under-discussed areas, such as investigating how to disrupt the operation of rogue agents and developing threat models for how U.S. AI development could be sabotaged.
Conclusion
This blog post has laid out how targeted government actions can build on private sector initiatives like the Frontier AI Safety Commitments and the corresponding frontier AI preparedness frameworks. By sharing national security knowledge, promoting transparency, and helping to develop best practices, the U.S. government can complement existing governance frameworks rather than reinventing them. This approach makes the most of the investments that AI companies have made in safety and security, while avoiding burdening companies with responsibilities that the government is better suited for. Many of these relatively light-touch measures can be implemented with existing authorities and resources. There is no need to wait for comprehensive regulation or legislation: the measures outlined in this blog post can already move the needle on AI preparedness.