
The National Policy Framework on Artificial Intelligence: Implications for Employers Using AI

April 1, 2026
Estimated Read Time: 5 mins

The White House just released the long-awaited National Policy Framework for Artificial Intelligence—a set of legislative recommendations surrounding the use, development, and regulation of AI (the “Policy Framework”). The Policy Framework builds upon prior Federal efforts to define a uniform approach to AI, including President Trump’s December 11, 2025 Executive Order. On its face, the Policy Framework seeks to preempt a broad range of state AI laws.

Policy Framework - Section VII 

Section VII recommends that Congress adopt a Federal AI framework that would expressly preempt applicable state laws. Specifically, Section VII calls for a Federal standard to stand in place of “State AI laws that impose undue burdens.” Section VII further directs that the Federal standard be “minimally burdensome.”

While the Policy Framework does not define “undue burdens” or “minimally burdensome,” Section VII gives examples of state regulation that would not be preempted. For example, Section VII acknowledges states’ rights to enforce “laws of general applicability,” such as “laws to protect consumers.” Notably, however, the Policy Framework does not include any “minimally burdensome” requirements or standards governing the use of AI in the employment context.

In addition, Section VII of the Policy Framework includes examples of what states should not regulate, lest they face a preemption challenge. Indeed, Section VII warns states against regulating areas “better suited to the Federal Government” or acting “contrary to the United States’ national strategy to achieve global AI dominance.” For example, Section VII provides that states should not be permitted to regulate AI development. Section VII also provides that states should not penalize AI developers for a third party’s unlawful conduct when using those developers’ AI software. The Policy Framework’s attempt to insulate AI developers from liability for the acts of third parties reflects the Trump Administration’s push for AI innovation.

The Growing State and Federal Regulatory Tension

In contrast with the Policy Framework’s minimalistic regulatory approach, many states have enacted detailed legislative controls on the use of AI and automated decision-making tools—including in the employment context. Accordingly, the Policy Framework may add to the regulatory tension that already exists between many state laws and Federal attempts to centralize AI regulation. 

For example, California now has two separate agencies regulating the use of AI in employment: the Civil Rights Department and the California Privacy Protection Agency. These agencies require both AI users and vendors to adhere to various due diligence and recordkeeping obligations. California also recently enacted regulations providing for potential joint liability between an employer and the developer of AI tools used by that employer. Additionally, under the new California SB 53, California created a new category of whistleblowing activity relevant to developers of AI models.

Elsewhere, Colorado has comprehensive AI legislation taking effect in June[1], while New York’s RAISE Act imposes safety protocols on large AI developers. Illinois recently expanded its regulations on the use of AI in employment, imposing notice requirements whenever AI is used to influence or facilitate employment decisions.

There has also been an uptick in AI-based civil litigation against employers and AI developers related to the use of AI tools in the workplace—such as in the recruiting context. For example, several “AI-related” class action lawsuits have been filed under Title VII of the Civil Rights Act and the Fair Credit Reporting Act against employers and AI developers. 

The shifting legal framework surrounding the use of AI, together with the varying applicability of new and existing state and federal laws, leaves many employers uncertain about best practices moving forward.

What This Means for Employers

To date, there is no governing Federal legislation that preempts the ever-increasing patchwork of state AI laws. As a result, multistate employers must be cognizant of the variety of AI regulations that impact the employment lifecycle and that, if not carefully navigated, can create risk.

But even if Congress were to pass legislation identical to the Policy Framework, some existing areas of legal risk at the state and Federal level would likely remain. As noted above, the Policy Framework would not preempt laws of “general applicability”—such as consumer protection and general employment laws. Thus, existing laws that are not specific to AI tools—but are implicated by their use—would likely still create exposure for employers.

Additionally, the Federal push to insulate third-party AI developers from liability may leave employers singularly exposed for using AI tools in the workplace. 

Businesses that use or plan to use AI in any aspect of the employment relationship should work with experienced employment counsel to mitigate risk and stay abreast of state and federal AI regulations. 

FOOTNOTES

[1] While the Colorado AI Act is scheduled to take effect in June, there have been significant efforts to delay, amend, and repeal the Act, including via a replacement bill from Colorado’s AI Policy Work Group. The new bill, which has strong support from Governor Jared Polis, would do away with the “high-risk” classification modeled on the EU AI Act and would instead focus on regulating automated decision-making, similar to the approaches taken by California and New York City.

Tags: Artificial Intelligence, Compliance, Employment

Disclaimer: This alert is provided for information purposes only, does not constitute legal advice, and is not intended to form an attorney-client relationship. Please contact your Sheppard attorney contact for additional information.
