During consideration of the Big Beautiful Bill, signed into law by President Trump last month, Sen. Ted Cruz (R-Texas) inserted a measure that would have discouraged state-level regulation of artificial intelligence for a decade. The proposed moratorium, which was supported by influential figures on the so-called tech right as well as members of the Trump administration, was justified on the basis that blue states like California might go too far in restricting AI development, holding back progress and creating a 50-state patchwork for AI companies.
This attempt to preempt state AI laws backfired, alienating pro-family conservatives and violating federalist principles. In response, the Senate voted overwhelmingly to remove the provision. However, the debate it sparked raised an important and as yet unresolved question: What roles should the federal and state governments play in shaping AI policy?
The White House’s AI Action Plan, released in late July, offers the most comprehensive look yet into the administration’s vision for artificial intelligence. Reflecting continued concerns about overreach in blue states, it suggests withholding funding from states that regulate AI too heavily. But it does not specify how the administration plans to draw that line. As the Trump administration charts the path forward for national AI policy, it must learn the right lessons from the failure of the moratorium.
One key lesson is that the moratorium’s scope and duration were excessive. A decade is a lifetime in AI development. The proposed measure would not only have blocked states from enforcing commonsense safeguards, like guardrails on mental health chatbots, but also threatened to preempt any state law involving algorithms—including those related to data privacy and children’s safety.
More fundamentally, resilient AI governance requires America to lean into federalism, not abandon it in the name of “efficiency.” That is why states must continue to act as “laboratories of democracy”: to test ideas, refine standards, and learn from failure. At a time when AI is rapidly transforming our workforce and economy, the moratorium would have stripped states of their essential role in responding to and shaping the impact of this new technology.
The outcome of the moratorium fight shows that the pro-family right has the strength and cohesion to block the tech right’s efforts to roll back commonsense regulation. But if these two factions want to achieve lasting policy wins, they need to learn how to work together to disrupt Big Tech monopolies while striking a balance between dynamic innovation and adaptable guardrails.
Instead of a rigid, top-down regime, America’s federalist model offers a responsive, competitive framework for AI innovation and governance. One promising idea is a dual charter system for consumer-facing frontier AI companies and services. Just as states have different laws of incorporation, Congress could authorize both the federal government and states to issue opt-in “AI charters” to model developers and deployers.
These charters could apply to specific AI products or entire models—offering, for example, responsible on-ramps for open-source models or safe harbors for certain types of AI-powered services. They could also provide targeted liability and regulatory relief to AI start-ups.
Under this system, AI firms could either continue to navigate the patchwork of state laws or opt into a charter that grants them some degree of civil liability protection, so long as they meet the grantor’s criteria for transparency, child safety, content remuneration, data stewardship, mitigation of ideological bias, and alignment with user values. States and federal entities could compete to offer charters with varying standards, just as they do with laws of incorporation. While corporate laws differ by state, once an entity is incorporated in, say, Delaware, it is subject only to Delaware law with respect to certain internal affairs, even if it does business in other states.
Similarly, under this system, if Elon Musk’s xAI chartered its Grok model under Texas law, California couldn’t impose conflicting or more restrictive rules for specific issues covered by the charter. Companies would have clarity and choice, and states would remain free to legislate.
Pairing the charter model with minimum federal protections would prevent a race to the bottom while preserving healthy competition. It would also be adaptable and responsive to evolving model properties, use cases, and risks.
AI is too complex and fast-moving for a top-down “silver bullet” solution. The AI Action Plan, for all its ambition, carries limited legal authority on its own. That’s why Congress must partner with the states to chart a course for AI policy that is both pro-family and pro-innovation.