On Wednesday, the White House released its long-awaited AI Action Plan, a 23-page document that outlines the administration’s plans for “winning the AI race” and ushering in “a new golden age of human flourishing, economic competitiveness, and national security for the American people.”
The plan was put together over 180 days as directed by President Trump in his January 23 executive order on “Removing Barriers to American Leadership in Artificial Intelligence.” It was authored by Office of Science and Technology Policy (OSTP) Director Michael J. Kratsios, Special Advisor for AI and Crypto David Sacks, and Secretary of State Marco Rubio in his capacity as Assistant to the President for National Security Affairs.
The plan casts a wide net, touching on issues ranging from energy infrastructure expansion to preparing for AI workforce disruptions, but its scope is still limited: its key proposals are just that — proposals — and are restricted to the tools available to the federal government and its constituent agencies. It is also focused largely on near-term actions rather than long-term vision.
But that near-term focus is not necessarily to the plan’s detriment. It reflects and incorporates many of the policy recommendations laid out by the more than 10,000 respondents to the National Science Foundation and OSTP’s request for information issued earlier this year.
Nor is the plan purely aspirational: to coincide with the launch of the AI Action Plan, President Trump signed three executive orders that operationalize some of its recommendations:
- The first EO — Accelerating Federal Permitting of Data Center Infrastructure — addresses AI infrastructure, including energy and permitting reform related to the AI data center buildout
- The second EO — Promoting the Export of the American AI Technology Stack — focuses on the export of U.S. AI-related technologies, as described in Part III of the AI Action Plan (see below for more)
- The third EO — Preventing Woke AI in the Federal Government — targets perceived ideological bias in language models by revising federal procurement guidelines, as recommended by the AI Action Plan (see below for more)
The AI Action Plan is split into three sections: Pillar I: Accelerate AI Innovation, Pillar II: Build American AI Infrastructure, and Pillar III: Lead in International AI Diplomacy and Security. Highlights from each section include:
Pillar I: Accelerate AI Innovation
The longest of the three sections, the plan’s first pillar focuses on creating a domestic environment where AI innovation can flourish, driven by deregulation, strategic investment in research, and accelerated adoption across industry and government. This pillar is broken down into 15 subsections; highlights include:
- Remove Red Tape and Onerous Regulation: The plan takes a clear deregulatory stance, directing the Office of Management and Budget (OMB) to lead a government-wide effort to “identify, revise, or repeal regulations” that hinder AI development. Relatedly, the plan announces that OSTP will issue a request for information asking which current regulations slow AI development. The plan also proposes withholding federal funds as a lever against state-level regulation, recommending that agencies consider a state’s AI regulatory climate when making funding decisions. This proposal echoes the concerns about “patchwork” AI regulations that several major companies raised in their RFI responses earlier this year, and it closely mirrors the recently proposed congressional moratorium on state-level AI regulation that was scuttled after pushback. Whether a similar effort centered in the executive branch would face the same level of resistance is an open question.
- Ensure that Frontier AI Protects Free Speech and American Values: The plan seeks to combat perceived ideological bias in AI. It calls for revising the National Institute of Standards and Technology (NIST) AI Risk Management Framework to remove references to concepts like misinformation (though, notably, the AI RMF does not specifically mention misinformation or disinformation), climate change, and Diversity, Equity, and Inclusion. It would also update federal procurement guidelines to ensure government contractors only use LLMs that are “objective and free from top-down ideological bias.” This recommendation was operationalized by President Trump’s same-day executive order: Preventing Woke AI in the Federal Government. In a related effort clearly aimed at increasingly popular models from Chinese developers, the plan directs NIST to evaluate frontier models from China for alignment with Chinese Communist Party directives.
- Encourage Open-Source and Open-Weight AI: The administration throws its weight behind open models, arguing that the U.S. must lead in developing “open models founded on American values.” The plan proposes actions to improve the financial market for computing power and to expand access to compute through the National AI Research Resource (NAIRR) pilot program.
- Enable AI Adoption: To spur the use of AI across American industry, the plan proposes establishing regulatory sandboxes and “AI Centers of Excellence” where companies can test new tools with a commitment to sharing data and results. It also calls for domain-specific efforts in sectors like healthcare and agriculture to accelerate the creation of national standards and to measure AI’s impact on productivity.
- Empower American Workers in the Age of AI: The plan acknowledges that AI will transform the labor market and calls for a “serious workforce response.” The proposals focus primarily on expanding AI literacy programs and directing federal agencies to continuously evaluate AI’s impact on American jobs.
- Support Next-Generation Manufacturing: This section advocates for using a suite of existing federal programs — including the Small Business Innovation Research program, CHIPS R&D funds, and authorities under the Defense Production Act — to invest in and scale up advanced manufacturing technologies.
- Build World-Class Scientific Datasets: In order to increase access to high-quality data, the plan proposes a number of data-related initiatives and incentives, including establishing secure computing environments within the National Science Foundation (NSF) and Department of Energy (DOE) to allow AI research on restricted government data.
- Advance the Science of AI: The plan calls for prioritizing investment in theoretical, computational, and experimental research to maintain U.S. leadership in discovering new AI paradigms. The plan says this priority will be formally included in the forthcoming National AI R&D Strategic Plan.
- Invest in AI Interpretability, Control, and Robustness Breakthroughs: Citing the challenge of using unpredictable AI systems in high-stakes national security applications, the plan calls for a new technology development program led by the Defense Advanced Research Projects Agency (DARPA) to pursue breakthroughs in AI interpretability, control, and robustness. It also recommends that federal agencies convene an “AI hackathon” to crowdsource vulnerability testing for AI systems.
- Build an AI Evaluations Ecosystem: The plan outlines a push to create a national evaluations ecosystem. Key recommendations include having NIST publish guidelines for federal agencies to conduct their own evaluations, investing in the development of AI testbeds for piloting systems in secure, real-world settings, and convening the NIST AI Consortium to establish new measurement science for identifying scalable and interoperable evaluation techniques.
- Accelerate AI Adoption in Government: The plan outlines several steps to boost the federal government’s own use of AI. It proposes a GSA-managed “AI procurement toolbox” to create a uniform, streamlined process for agencies to acquire and customize various AI models. It would also mandate that all federal employees whose work could be improved by AI be given access to and training for frontier language models.
- Drive Adoption of AI within the Department of Defense: While the proposals are largely focused on improving talent and processes rather than specific warfighting applications, one recommendation stands out: prioritizing DOD-led agreements with private cloud service providers to codify priority access to computing resources in the event of a national emergency or major conflict. The plan also calls for the DOD to develop a streamlined process for evaluating its major operational workflows to identify and prioritize opportunities for automation with AI, with the goal of permanently transitioning successful automations into practice as quickly as practicable.
- Protect Commercial and Government AI Innovations: Specifically recognizing the security risks to U.S. AI companies, talent, intellectual property, and AI systems, the plan recommends a collaboration between federal agencies — including the DOD, DHS, and the Intelligence Community — and leading American AI developers. The partnership is intended to help the private sector actively protect its AI innovations from risks such as malicious cyber actors and insider threats.
- Combat Synthetic Media in the Legal System: Acknowledging that AI-generated media poses “novel challenges to the legal system,” the plan focuses on equipping the justice system with the necessary tools to deal with an influx of fabricated information.
Pillar II: Build American AI Infrastructure
The second section is focused primarily on how to build out the new infrastructure — energy infrastructure in particular — needed to compete in AI. Key recommendations from this section were swiftly operationalized by President Trump’s same-day executive order on Accelerating Federal Permitting of Data Center Infrastructure. This section is broken down into eight subsections:
- Create Streamlined Permitting for Data Centers, Semiconductor Manufacturing Facilities, and Energy Infrastructure while Guaranteeing Security: To accelerate construction, the plan calls for significant permitting reforms and categorical exclusions. It recommends that federal agencies be directed to make federal lands available for the construction of data centers and the power generation infrastructure needed to support them, and it recommends expanding efforts to use AI to speed up environmental reviews.
- Develop a Grid to Match the Pace of AI Innovation: The plan advocates for a comprehensive strategy to expand the U.S. power grid. This includes prioritizing new energy generation technologies like enhanced geothermal, nuclear fission, and nuclear fusion. Notably, the plan makes no mention of green or renewable energy sources.
- Restore American Semiconductor Manufacturing: The plan advocates for revitalizing domestic semiconductor manufacturing without what it calls “bad deals for the American taxpayer or saddling companies with sweeping ideological agendas,” a clear reference to the Biden administration’s policy of tying CHIPS Act subsidies to a range of commitments from potential recipients, such as providing childcare for workers. A “revamped” CHIPS Program Office would lead this effort, focused on delivering a “strong return on investment” and removing “all extraneous policy requirements” for projects receiving CHIPS funding.
- Build High-Security Data Centers for Military and Intelligence Community Usage: The plan calls for new technical standards for high-security facilities. This effort would be led by the DOD, Intelligence Community, and NIST, in collaboration with industry partners.
- Train a Skilled Workforce for AI Infrastructure: The plan calls for a national initiative, led by the Departments of Labor and Commerce, to identify high-priority occupations essential for the AI infrastructure buildout. It also proposes federal partnerships with state and local governments to support industry-driven training programs and expand early career exposure and pre-apprenticeships for middle and high school students.
- Bolster Critical Infrastructure Cybersecurity: To counter threats like data poisoning and adversarial attacks on AI systems used in critical infrastructure, the plan proposes establishing an AI Information Sharing and Analysis Center (AI-ISAC). Led by DHS, the AI-ISAC would promote the sharing of AI-security threat intelligence across U.S. critical infrastructure sectors. The plan also calls for ensuring known AI vulnerabilities are shared with the private sector when appropriate.
- Promote Secure-By-Design AI Technologies and Applications: This section focuses on embedding security into the development process. It calls for DOD to continue refining its Responsible AI and Generative AI Frameworks and Toolkits. It also directs the Office of the Director of National Intelligence (ODNI) to publish an Intelligence Community Standard on AI Assurance.
- Promote Mature Federal Capacity for AI Incident Response: The plan aims to formally incorporate AI incident response into existing cybersecurity doctrine for both the public and private sectors. Key actions include directing NIST to partner with industry to establish AI-inclusive incident response standards and modifying CISA’s Cybersecurity Incident & Vulnerability Response Playbooks to include requirements for Chief Information Security Officers to consult with their agency’s Chief AI Officer during an incident.
Pillar III: Lead in International AI Diplomacy and Security
The third and final pillar focuses on leveraging the current U.S. lead in AI to maintain that advantage. The strategy detailed in this section is two-pronged: actively promoting the export of U.S. technology and standards to allies while limiting the access of strategic rivals. It is broken into seven subsections:
- Export American AI to Allies and Partners: The plan proposes a program to facilitate the export of U.S. hardware, models, software, and applications. The Department of Commerce would lead this effort by gathering proposals from industry consortia for “full-stack AI export packages.” Various government bodies, including the State Department and the U.S. International Development Finance Corporation, would then coordinate to facilitate deals that meet U.S. security requirements. This section’s recommendations were quickly operationalized by President Trump’s executive order on Promoting the Export of the American AI Technology Stack, which was signed the same day. That order tasks Secretary of Commerce Howard Lutnick, in consultation with Secretary of State Rubio and OSTP Director Kratsios, with establishing and implementing an “American AI Exports Program” within 90 days of the order’s signing.
- Counter Chinese Influence in International Governance Bodies: Citing concerns about Chinese influence in international standard-setting bodies like the UN and the International Telecommunication Union, the plan calls for a more assertive U.S. posture. The Departments of State and Commerce would lead an effort to leverage the U.S. position in these organizations to advocate for governance approaches that promote innovation, reflect American values, and counter authoritarian influence.
- Strengthen AI Compute Export Control Enforcement: This section details a plan to ensure advanced AI chips do not end up in countries of concern. Actions include exploring the use of new location verification features on advanced chips and establishing a new Department of Commerce effort to collaborate with the Intelligence Community on global enforcement, a move that could help offset the Bureau of Industry and Security’s notoriously slim budget without congressional action. This enhanced effort would include increasing end-use monitoring in countries with a high risk of diversion.
- Plug Loopholes in Existing Semiconductor Manufacturing Export Controls: The plan calls for the Commerce Department to “plug loopholes” in the current export control regime by developing new controls on semiconductor manufacturing subsystems.
- Align Protection Measures Globally: The plan stresses that the U.S. must encourage allies and partners to adopt complementary export controls and not “backfill” by supplying adversaries with controlled technologies. Should allies fail to do so, the plan suggests using tools like the Foreign Direct Product Rule and secondary tariffs to achieve alignment.
- Ensure that the U.S. Government is at the Forefront of Evaluating National Security Risks in Frontier Models: Arguing that the risks present in American frontier models are a preview of future adversary capabilities, the plan calls for a robust government evaluation effort. This includes evaluating frontier AI systems for national security risks, such as CBRNE and cyber applications, in partnership with developers. This effort would be led by NIST’s Center for AI Standards and Innovation (CAISI), which would also evaluate adversaries’ AI systems for vulnerabilities like backdoors.
- Invest in Biosecurity: Acknowledging that AI could create new pathways for malicious actors to synthesize harmful pathogens, the plan proposes several new safeguards. It calls for requiring all institutions receiving federal research funding to use nucleic acid synthesis providers that have robust screening capabilities, with enforcement mechanisms to ensure compliance. It also recommends that OSTP convene government and industry to develop a data-sharing mechanism to screen for malicious customers.
While the plan is quite comprehensive, there are also notable omissions: as we noted above, the recommended energy expansion mentions technologies like geothermal, nuclear fission, and nuclear fusion, but leaves out solar and wind. Most notable of all, however, is the plan’s silence on immigration policy. A significant number of the responses to OSTP’s RFI — including Google’s — stressed the importance of immigration to maintaining a U.S. AI talent advantage. That omission is conspicuous, but not entirely surprising — high-skilled immigration has proven to be a contentious issue among President Trump’s backers.
Still, early reaction to the plan has been largely positive, perhaps because it incorporates so many of the ideas and recommendations proposed by the most prominent respondents to OSTP’s Request for Information. Recommendations like promoting open-source and open-weight models, streamlined permitting, pre-empting state and local regulation, enhanced lab and data center security, and continued federal research into AI interpretability all made the cut.
For more on AI and security policy, subscribe to policy.ai, CSET’s monthly newsletter.
For media inquiries, contact danny.hague@georgetown.edu