A new framework to create alignment on national AI legislation
What is the framework?
The White House National Policy Framework for Artificial Intelligence, released on March 20, contains a series of legislative proposals for Congress to enact federal legislation governing AI-related issues. Unlike previous White House AI documents, such as the July 2025 AI Action Plan and subsequent executive orders, the National Policy Framework goes beyond laying out a high-level strategy. It explicitly calls on Congress to take legislative actions that align with the Trump administration’s AI policy goals.
The National Policy Framework is a direct follow-up to the December 2025 executive order “Ensuring a National Policy Framework for Artificial Intelligence,” which called for the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare legislative recommendations establishing a uniform federal policy framework for AI. The December executive order followed at least two failed attempts by Congress to enact a moratorium preventing states from enforcing their own AI laws, including by conditioning states’ access to certain federal funds on not enforcing AI legislation.
What does the framework do?
As outlined in the December executive order, the underlying goal of the National Policy Framework is to preempt state AI regulation and establish federal government leadership of U.S. AI policy in accordance with the administration’s AI agenda. To do so, the administration is signaling to lawmakers the specific AI policy issues where it desires legislative action. The framework is a call to action for Congress to pass AI legislation that aligns with the White House priorities listed in the document.
What does the framework not do?
The National Policy Framework is neither federal AI regulation nor an executive order; it is a non-binding list of recommendations to Congress and contains no directives for the executive branch. Its recommendations also vary in specificity and leave many debates up for interpretation.
Signaling the administration’s AI governance priorities
How does the framework reflect the administration’s priorities?
The National Policy Framework marries some of the Trump administration’s existing AI policy priorities from the AI Action Plan, such as spurring innovation and expanding energy infrastructure, with issues that are particularly salient to state lawmakers and the public. Unlike the Action Plan, which focuses heavily on executive branch agencies, the framework calls for congressional action that aligns with the White House’s priorities and public demand signals for certain types of AI regulation.
For instance, Section I of the framework focuses on protecting children and empowering parents, which has not historically been a core part of the Trump administration’s AI strategy. However, legislative proposals to protect children from harms associated with AI usage have gained significant traction among both state and federal lawmakers. The framework’s emphasis on child safety may be the administration’s acknowledgment of this increasingly bipartisan issue’s importance, or an attempt to soften opposition to preemption of state laws by foregrounding uncontroversial policy proposals likely to garner enthusiastic public support.
Overall, the framework can be interpreted as the opening move in an ongoing negotiation between the administration and congressional lawmakers from both parties over national AI legislation. We can likely expect executive and legislative stances to shift over time. While it is unlikely that Congress will pass the framework as-is, the administration may be inviting Republican lawmakers to offer alternative visions for a federal AI governance regime.
What is particularly noteworthy about the framework?
A few components of the framework are especially notable compared to the administration’s prior AI policies. For instance, portions of the framework relating to censorship and free speech build on the December executive order’s mention of state laws embedding ideological bias into models, one of the reasons cited to call for federal preemption (the administration also cited this issue in a July 2025 executive order after the release of the Action Plan). Calling for congressional action to limit executive power over AI content and expression could be an effort to protect this administration’s AI governance priorities around bias in generative AI outputs against rollback by future administrations.
Section III’s discussion of deepfakes and other forms of identity theft using AI is also noteworthy. It makes explicit exceptions for parody, satire, or other expressive uses of AI-generated digital replicas, which may be challenging to differentiate from unlawful uses. Moreover, the framework’s treatment of this issue focuses on protecting individuals from commercial uses of their identity and violations of intellectual property rights. The administration may have included these provisions to draw upon broad support for addressing these issues and to avoid a perceived patchwork of laws across states, since many states are already regulating deepfakes and related problems. Unlike Section I, however, Section III does not include an explicit carveout protecting state deepfake and identity laws from preemption.
Some proposals in the framework leave it to Congress to flesh out key details. For example, Section V’s discussion of innovation and ensuring American dominance recommends establishing regulatory sandboxes, improving the AI-readiness and accessibility of federal datasets, and governing AI through sector-specific, existing authorities, but does not specify approaches to stimulate AI innovation, reduce barriers, or accelerate deployment. Similarly, Section VI discusses developing an AI-ready workforce, but does not specify steps to improve AI literacy, which can help policymakers and workers understand how the technology may affect aspects of future work.
Finally, frontier AI risks are not a major focus of the document. Section II simply states that appropriate agencies should understand and plan for potential national security considerations. This is consistent with the administration’s prior stances on AI innovation and adoption. It remains to be seen how future congressional proposals will address these topics, and how they might interact with existing state-level policies.
Implementation questions
Which parts of the framework may face implementation challenges?
Section VII directs Congress to preempt state AI laws that impose undue burdens while respecting states’ rights to enforce laws of general applicability and laws related to zoning and use of AI. The framework provides three areas of AI regulation that states should not govern:
- AI development;
- The use of AI for activity that would be lawful if performed without AI; and
- AI developer liability for unlawful third party conduct involving their models.
These categories are broad and leave many questions unanswered. For instance, does “activity that would be lawful if performed without AI” cover the integration of AI decision-making into high-risk areas such as employment and healthcare, or into certain social media applications? Or is this phrase intended to prevent the enactment of safety restrictions on AI that are not present for other types of technologies, such as requirements that AI systems refuse harmful prompts?
Furthermore, the assertion that states should not be able to govern AI development or penalize developers for unlawful conduct by third-party users implies that these are regulatory responsibilities reserved for the federal government, but leaves open the question of when or how Congress might decide to act on these responsibilities. Given the Trump administration’s stated light-touch approach to AI regulation, Section VII suggests that the administration would prefer Congress to explicitly bar states from these types of regulatory actions while declining to take such action at the federal level.
Assessing the framework’s potential for impact
How aligned is the framework with existing Congressional legislative proposals?
Congress will ultimately arbitrate which of the framework’s recommendations – if any – become law. Just two days before the release of the White House framework, Senator Marsha Blackburn released a legislative discussion draft titled the TRUMP AI AMERICA Act. The two proposals highlight some shared priorities, and their similar release dates may encourage alignment around key AI governance issues. The Act aims to protect children, creators, conservatives, and communities through varied mechanisms, some of which are covered in the White House framework.

Other mechanisms in Senator Blackburn’s draft that are not present in the framework, including sunsetting Section 230 (enacted by the Communications Decency Act of 1996) and imposing a duty of care on AI developers, may provoke intense pushback from industry. Another area of dissonance is state preemption: Senator Blackburn’s draft only preempts state AI laws to the extent that they conflict with the bill’s protections, whereas the White House framework articulates new categories of AI regulation that states should abstain from. Although other members of Congress are proposing AI governance frameworks, Senator Blackburn’s draft is notable because of its timing and the Senator’s role in staving off earlier efforts to codify a moratorium on state AI laws.
Conclusions and future questions
The release of the framework may intensify pressure on Congress to act before the midterm elections. Many members of Congress are genuinely concerned about AI and may want to avoid being perceived as indifferent to the technology’s impacts on society. Although Congress may adopt a different legislative approach, the White House framework contains an assortment of low-hanging fruit that, judging from the overlap with Senator Blackburn’s draft, could gain traction. Recommendations that may receive bipartisan support include tracking trends in task-level workforce realignment driven by AI and ensuring agencies possess the technical capacity to understand and mitigate national security concerns associated with frontier AI.
The framework’s section on state AI preemption will likely face resistance in Congress given the pockets of opposition that have formed on both sides of the aisle. If resistance subsides and Congress codifies a variant of the framework’s preemption clause, some states will likely challenge the constitutionality of the law while others may be dissuaded from enforcing all or a subset of their AI laws.
Acknowledgements
Special thanks to Owen Daniels and Danny Hague for their thoughtful feedback, and other members of CSET for their ideas and contributions.