On October 24, 2024, the White House issued the first-ever National Security Memorandum (NSM) on Artificial Intelligence. President Biden directed his national security staff to develop the NSM in October 2023 as part of his Executive Order on Safe, Secure, and Trustworthy AI, which federal agencies have been implementing over the past year. According to a fact sheet released by the White House, the NSM focuses on three critical areas: ensuring U.S. leadership in developing “safe, secure, and trustworthy” AI, using cutting-edge AI to advance U.S. national security interests, and building an international consensus on AI governance.
CSET experts offered their reactions and insights into the new AI NSM and its implications for U.S. national security, geopolitical competitiveness, and AI development more broadly.
1. What does this National Security Memo mean and what does it do?
The NSM is meant to do three things: 1) accelerate the development of cutting-edge AI tools and capabilities while protecting our cutting-edge AI R&D from exploitation by our competitors; 2) promote adoption of advanced AI capabilities to address our national security needs while ensuring these uses of AI follow the law, protect human rights, and advance democratic values; and 3) help other nations to do the same as they use AI to meet their national needs.
To accomplish these three things, the NSM aims to accelerate AI development by prioritizing efforts to ensure that researchers beyond the big AI companies can continue to do meaningful AI research. It also supports the safe development of the next generation of supercomputers and related technologies, partly through the efforts of the AI Safety Institute. It takes action to secure and diversify AI-relevant supply chains, protecting them from exploitation by competitors, and directs government agencies to help the U.S. AI industry defend itself against foreign espionage and exploitation. To facilitate government adoption of AI, it directs reviews of relevant personnel policies and acquisition practices, and it creates a Governance and Risk Management Framework to align government uses of AI with American values and to guide uses of AI for national security applications. Finally, it directs the federal government to work toward a stable, responsible, and rights-respecting multinational governance framework for AI.
— Igor Mikolic-Torreira, Director of Analysis
2. How does the NSM fit with the Biden administration’s previous AI policies?
The three pillars underpinning this NSM closely follow and advance the principles of the Biden administration’s previous AI policies. Whether it is restricting exports of chips and semiconductor manufacturing equipment to slow China’s ability to use AI for its military modernization, or working diplomatically with allies and partners to develop AI governance frameworks, the U.S. government’s existing efforts already rest on these pillars.
While the NSM largely focuses on how the U.S. government will adopt and use AI, U.S.-China competition looms large as a backdrop. The memorandum seeks to ensure that U.S. government agencies and the intelligence community can take advantage of the opportunities that AI presents for national security missions, but it is motivated in part by a desire to ensure that rivals like China do not harness those same opportunities to undermine U.S. competitiveness, both militarily and economically.
— Hanna Dohmen, Research Analyst
3. What does the NSM say about computing resources and AI?
The NSM focuses on frontier AI: a class of increasingly large and capable machine learning models based on the transformer architecture. Developing these models requires orders of magnitude more computational resources than previous types of AI models needed. For that reason, the NSM and the remarks of APNSA Jake Sullivan focused heavily on protecting and promoting computational resources, including high-end AI chips and the specialized servers into which they are integrated, as well as the software, networking, and physical infrastructure used to connect, power, and cool servers in a datacenter.
To protect computational resources, the Biden administration has employed export controls to keep competitors like the PRC from acquiring or manufacturing the specialized AI chips used to train large AI models. To further protect this chip technology, the NSM directs that the President’s intelligence priorities include assessing foreign threats to the U.S. semiconductor industry.
On the promotion side, the CHIPS Act is already spurring investment in domestic facilities to fabricate leading-edge chips, including AI chips. The NSM builds on the CHIPS Act by directing the Departments of Defense and Homeland Security to attract talent for semiconductor design and production. The document also reiterates the importance of providing computational resources to under-resourced organizations through the National AI Research Resource.
— Jacob Feldgoise, Data Research Analyst
4. How does the NSM’s focus on frontier AI models apply to China?
The NSM makes clear both the Biden administration’s desire to cement the United States’ current advantage in developing frontier AI and its fear of strategic surprise. Like other White House AI documents, however, the NSM does not specify the military applications that state-of-the-art AI systems would enable. While frontier models have clear potential applications in military decision-making and logistics, those applications have largely yet to be proven. Meanwhile, there are many applications for smaller models and edge computing in drones and other hardware, neither of which features in this NSM.
That said, the administration is correct to worry about China’s potential development of AI-enabled decision support systems. Unlike the U.S. military, which delegates decision-making authority down the chain of command, the People’s Liberation Army (PLA) may struggle to make decisions on future battlefields because of its bureaucratic and centralized decision-making culture. Some Chinese defense experts believe AI-enabled systems could allow the PLA to automate decision-making, potentially sidestepping these traditional difficulties. Of course, over-dependence on AI-enabled decision support tools would present other risks, both for the PLA and for its adversaries.
— Sam Bresnick, Research Fellow
5. What are tech companies doing for national security and why does the NSM prioritize protecting the AI industry?
Currently, U.S. companies produce the most powerful and capable generative AI tools. They are also making significant capital investments in computing resources to build even more capable models. These companies have been sharing their concerns about how their models might be targeted by adversaries, as well as related concerns about how commercially available AI tools might be misused by bad actors. While technology companies are extremely knowledgeable about AI, the U.S. government is more knowledgeable about dealing with foreign threats, whether from nation-states or terrorists. The NSM makes clear that the government sees AI companies as a strategic asset that should be protected; otherwise, the United States could stand to lose the technical, economic, and military advantage these companies enable.
Relatedly, U.S. tech companies are more engaged in national security issues than ever. Many took a stand against Russia when it invaded Ukraine by providing technical services and products, and these companies are now strengthening their relationships with the U.S. military to find ways to better secure the nation. We wrote about how critical those tech partnerships were to the 18th Airborne Corps’s successful use of the Maven Smart System, a tool that has dramatically improved the artillery fires process for the Army. There are many more opportunities with generative AI; however, the process for licensing these services is still too slow, and not enough servicemembers have been exposed to what generative AI might do for them. Tech companies have been clamoring for revisions to DOD policies that would speed the process up.
— Emelia Probasco, Senior Fellow
6. The NSM “doubles down” on the National AI Research Resource. What is the NAIRR and why is the White House scaling it up?
The National Artificial Intelligence Research Resource (NAIRR) provides U.S. researchers with computational, data, and training resources for AI discovery and innovation. It aims to narrow the access divide between well-resourced companies and under-resourced organizations, such as those in academia, and to support research that industry may overlook or deprioritize. A pilot version of the NAIRR launched in January 2024 as a proof of concept.
The White House “doubling down” on the NAIRR suggests two things. First, the NAIRR pilot has likely proven effective, at least to a degree, in supporting under-resourced organizations and diversifying AI research; these early returns may merit additional resources and an expanded pilot. However, questions remain over whether new computing resources will be provided and whether and when the NAIRR will be expanded beyond the pilot phase. Second, the White House appears increasingly concerned about resource disparities between industry and organizations outside the private sector. Going forward, it will be important to track how much computing the NAIRR receives, the extent to which it bridges the gap with industry, and how those resources translate into effective research and impact.
— Kyle Miller, Research Analyst
7. What does the NSM mean for the AI Safety Institute, its interactions with U.S. industry, and its role in the U.S. government?
The U.S. AI Safety Institute (AISI) has been a core part of the administration’s efforts to promote safe and trustworthy AI, and this memo continues that trend. The NSM details three core activities for the AISI: testing, evaluation, and management of AI risks. The memo assigns full or partial responsibility to the AISI for eight deliverables with deadlines in the next year. Some of these deliverables are in line with the expectations already set by last year’s Executive Order on AI, but this memo expands the responsibilities of the AISI as well. The AISI may need new types of talent to fulfill its additional responsibilities.
The activities of the AISI have so far been restricted to developing guidance and best practices. The NSM expands the AISI’s scope in three directions: technical research, coordination with industry, and coordination among government agencies. The NSM tasks the AISI with actively testing models, developing benchmarks, and evaluating the usefulness of risk-mitigation techniques; these are deeply technical tasks that require top talent. The NSM also designates the AISI as the central node for communication between the U.S. government and industry. This expanded scope lacks detail, and while it might be covered by the existing consortium the AISI manages, it risks burdening the AISI with complaints from vendors whose AI products a government purchaser has rejected. To avoid this, the AISI will need to clarify and communicate its own role, and the roles of purchasing agencies, in AI testing, a task that will require bureaucratic expertise.

Finally, many of the tasks assigned to the AISI involve coordinating the development of testing and evaluation for AI risks with “expert agencies,” such as the National Security Agency for cybersecurity risks. The respective roles and responsibilities of the AISI and these expert agencies are unclear; effectively assigning and assuming duties in these interagency partnerships will require the AISI to develop further bureaucratic expertise.
The AISI is a natural hub for the tasks and expectations assigned to it, but they fall on an institute that still lacks significant funding and staff. If the aims of this memorandum are to be met, the AISI will require additional resources and institutional support.
— Colin Shea-Blymyer, Research Fellow
8. What are the near- and longer-term implications of the NSM for national security?
In the near term, the National Security Memo appears focused on pushing the broader national security establishment to seek out innovative applications of AI, in contrast to narrower efforts such as the Department of Defense’s Replicator initiative. The NSM also highlights known hurdles the national security establishment must overcome, such as the urgent need to reform acquisition and procurement systems for AI capabilities, if it is to translate the American AI ecosystem’s strengths, such as attracting top talent and accessing leading-edge compute, into concrete military advantages over competitors.
Over the longer term, the NSM reinforces other costly signals the U.S. government has sent to allies, partners, and competitors about its intention to develop military AI capabilities responsibly, in line with democratic values and respect for human rights, civil liberties, and international law. The document lays down a marker for the Biden administration’s efforts in this area before the end of its term and bolsters past U.S. initiatives, including the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, the DoD’s Ethical AI Principles, and DoD Directive 3000.09 on Autonomy in Weapon Systems. Whether competitors take such signals at face value or make similar commitments to responsible military AI development themselves remains to be seen, and merits careful monitoring going forward.
— Owen Daniels, Associate Director of Analysis