A First Principles Resolution to the Distribution of AI Policy Authority, by Kevin Frazier
A few months ago, Dr. Benjamin Jensen, director of the Futures Lab and a senior fellow for the Defense and Security Department at the Center for Strategic and International Studies, testified before Congress that the nation that leads in AI will shape the future. Nothing has changed in the interim. What remains uncertain, however, is whether the U.S. will retain its leading position as our adversaries bend their laws and marshal their resources to relentlessly pursue technological horizons.
That uncertainty is partially the result of an unsettled question: the proper role of the states and federal government in regulating AI. The Founders offered a clear answer in their decision to abandon the Articles of Confederation and adopt the Constitution.
Applied to the current AI debate, three principles the Founders deliberately infused into our Constitution resolve questions about the authority of each actor. Subsequent developments in related areas of the law—namely, the Commerce Clause—have given rise to the false impression that muddled judicial interpretations somehow relaxed or blurred these principles. They remain, however, as foundational today as they were more than 200 years ago. Adherence to these principles is essential both as a matter of fidelity to the Founders’ vision and as a means to secure an AI regulatory posture that does not run afoul of the Constitution.
The first is that the federal government alone is responsible for matters that implicate the economic and political stability of the country, while states maintain considerable discretion to address local concerns. Within this framework, it cannot be the case that the absence of an affirmative federal response to such a national issue invites or permits state action. The idea that “each state has the authority to set for itself the limit of its regulatory powers” invites the serial testing of state authority—an exercise that, if replicated by 50 states, will result in jurisdictional squabbles at a minimum and, potentially, national discord with respect to issues of national concern.
Worry about states interfering with national affairs animated the transition from the Articles of Confederation to the Constitution. While policy experimentation by states has received praise in more recent times, Hamilton lamented experiments that undermine uniformity with respect to areas like trade and diplomacy. He and others—namely, the coauthors of the Federalist Papers—acknowledged that “different regulations of the different States” over things such as currency could undermine the nation’s economic competitiveness. Such concerns drove them to intentionally and explicitly allocate to the federal government the powers necessary to “provide for the harmony and proper intercourse among the States.” This distribution was all the more important when it came to securing the national interest during crises. Jefferson lamented “a want of sufficient means at their [Congress’s] disposal to answer the public exigencies and of vigor to draw forth those means.” Matters such as contagions, wars, and economic collapse surely qualified as such exigencies.
The emerging threats to national security and economic stability posed by AI advances place regulation of frontier AI models squarely and exclusively in the authority of the national government. To focus on one example, AI has lowered the barriers to the creation and deployment of bioweapons by bad actors. Defensive measures, however, have not progressed at the same rate. Researchers from Georgia Tech and Yale recently concluded that the technical countermeasures to identify and mitigate such harms rest on unfounded assumptions. They warn that even with significant technical progress, the nation must pursue alternative strategies to ready itself for a near future in which synthetic pathogens go undetected and uncontained. This is a national endeavor that cannot be waylaid by state laws, no matter how well intentioned.
Second, the extensive authorities reserved to each state end at their respective borders. As the Supreme Court has specified on multiple occasions, the equal sovereignty of the states is a fundamental principle of our Constitution. Our constitutional order does not permit one state to intentionally and substantially interfere with the liberty and freedom of another. Political clout, economic might, and sheer population do not grant any state the authority to step into the shoes of the federal government. At the Founding, Virginia made up about 20 percent of the nation’s population and was home to several of the country’s current and future leaders. Then and now, it had no more authority to directly or indirectly steer national matters than Delaware (or any other state, small or otherwise).
The constitutional order designed by the Founders renders all states “equal in power, dignity, and authority.” They had experience with large states leveraging their economic might and geographic advantages to profit at the expense of their neighbors. By way of example, the “king of New York” drew the ire of early Americans for requiring that all vessels transiting through its waters pay an entrance and clearance fee. It comes as no surprise that the Founders celebrated constitutional provisions that would foreclose this sort of big state tyranny. As was later recognized by the Supreme Court, “[o]ne cardinal rule, underlying all the relations of the States to each other, is that of equality of right. Each state stands on the same level with all the rest.” That rule is fundamentally broken if states can effectively compel non-residents to bend to the preferences of their respective people. In short, such an outcome would be “inconsistent with the spirit of a Constitution written in the wake of revolution against an imperial power.”
Whether a state is the fourth largest economy in the world or the one hundred and fourth should have no bearing on its authority to shape the lives of Americans beyond its borders. Though the Supreme Court has acknowledged and tolerated the inevitability of some regulatory spillover, cases like National Pork Producers Council v. Ross do not permit the sorts of regulations pending before many state legislatures—regulations that may deny all Americans access to a good itself because of the preferences of one political community. The state law at issue in that case—a prohibition in California on the sale of pork raised under certain conditions—did not compel out-of-state producers to overhaul their entire production process or alter the nature of the underlying product. That is not the case when it comes to state regulation addressing the development of frontier AI models.
Building new pig pens to satisfy the preferences of Californians is technically and financially feasible, but training two frontier AI models—one to comply with the preferences of the Bay Area and one for the rest of us—is a billion-dollar undertaking that rests on uncertain and evolving technical realities. Labs cannot afford to conduct 50 different training runs, each of which may last for several months and involve an incredible amount of data and computation. Any changes they make to that process as a result of state regulation will impact the model made available to the rest of the country (and, in some cases, billions of people around the world). Contradictory and vague laws in California, Colorado, and New York may force labs to alter their training schedules, thwarting technological progress that has long fueled the American Dream. Non-residents of those states will bear considerable costs as a result; progress delayed is progress denied. Some Americans may never experience the improvements in education, healthcare, and transportation that could have been realized by a national approach to AI development.
States can and should implement regulations that are responsive to the demands of their political communities and demonstrated to achieve those ends. Finding that line will require an iterative approach through which states can assess whether such laws operate as intended without infringing on the rights of non-residents or impeding national AI progress. State legislators can facilitate that approach through the adoption of retrospective review requirements, sunset clauses, and frequent independent evaluations. Moreover, states can first test interventions that carry little to no risk of interfering with the AI tools themselves, such as increased training for end users, AI literacy programs, and procurement standards that align with the state’s values.
The third principle, related to the second, is that the ultimate authority in our constitutional system rests with the people. Our Founders labored to develop a system in which every citizen could exercise meaningful control over their daily lives. That control included freedom of movement and political autonomy and, more generally, rested on a social contract with the government.
Consent was, is, and must remain an aspect of that social contract. Under the weak central government of the Articles of Confederation, however, citizens found their lives dictated by decisions made in other states. As previously mentioned, states often imposed significant financial burdens on non-residents, who struggled to receive protection from either their own state or the national government. Non-residents also often found their ability to enforce legal rights foreclosed when some states refused to recognize judgments rendered by another state’s courts. These and related issues animated the Founders to consolidate power in the federal government, clarify the role of the states, and constrain the extent to which one state could interfere with another.
Extraterritorial regulation of AI jeopardizes these and other core features of individual agency. The nature of AI development means that if labs are compelled to comply with one state’s requirements for model training and release, those requirements will be imposed on the rest of the country—rendering us all less likely to realize the benefits of AI advances in as affordable and expeditious a fashion as possible. Americans may be able to move as freely as they’d like, but they will still find themselves under the thumbs of state legislators over whom they have no control. Such a world is the antithesis of liberty. Though we’ve seen this dynamic play out in other contexts such as emissions regulations, AI is distinct. Denial or delay of the most sophisticated AI as a result of flawed state legislation is not a matter of inconvenience; it’s a question of access to the greatest driver of human flourishing we have yet developed.
In closing, the question before Congress is not whether to protect Americans from the genuine risks posed by advanced AI, but how to do so while preserving the constitutional design that has long powered American prosperity. The Founders centralized those matters that make or break the nation’s economic and political stability, reserved to the states the authority to govern local conduct, and rejected any arrangement that let one state rule another by virtue of the size of its economy or its voting power. Applied here, that design yields a clear rule of decision: the development of frontier AI models is a national undertaking; the uses of those systems within a state are a proper subject of deployment rules tailored to local concerns.
The path forward is to protect people where harms actually occur, at the point of use, while governing the direction and pace of AI advances uniformly at the federal level. States should be empowered—indeed, encouraged—to police unfair and deceptive practices, to adopt procurement standards, and to specify disclosure obligations in specific contexts. Congress should take up the regulatory tasks with nationwide consequences.
Kevin Frazier is an AI Innovation and Law Fellow at Texas Law. Excellent research assistance by Sven Lohse was made possible by a grant from the Institute for Humane Studies (grant no. IHS019239).

