From Paralysis to Progress: Harnessing Abundance to Deregulate and Accelerate AI Development, by Kevin Frazier
The accumulation of well-meaning procedures can, unintentionally, grind progress to a halt. This is commonsensical, simple, and the central message of the Abundance movement. Yet, despite being popularized by Ezra Klein, supported by ideologically diverse groups, and implicitly embraced by the likes of Sen. Todd Young (R-Ind.) on certain issues, Abundance has yet to penetrate the policy discourse surrounding artificial intelligence (AI).
As Congress contemplates a potential moratorium on the very kind of state-level regulations that have historically slowed housing development, stifled public transportation projects, and generally impeded strides toward human flourishing, a familiar pattern emerges. Many stakeholders, often with genuine concern, insist that each state must reserve the right to steer AI development, deployment, and use through its own bespoke labyrinth of procedural requirements.
Abundance, therefore, confronts its inaugural, high-stakes test. Is it merely a talking point? Good podcast fodder? Or a message that can live up to its potential to unite Americans tired of a government that seems more adept at extinguishing progress than at fueling it?
More to the point, can Abundance’s outcome-oriented approach to policy cut through the often sensationalistic and fear-driven talking points that have, thus far, largely characterized the AI governance debate? Can a renewed prioritization of effective, smart governance expose the inherent shortcomings of possibly well-intentioned but probably misguided state-level regulatory frameworks? And crucially, can this nascent movement persuade disparate political communities to look beyond immediate anxieties and, instead, rally behind a future enriched by AI acceleration and adoption?
To answer these questions, it’s worth first recalling some essential tenets of the Abundance philosophy, then observing how these principles manifest in proposed AI regulations, and finally, charting a more productive path forward.
At its core, the Abundance framework is built on several critical observations about the nature of procedure. First, additional procedural requirements rarely work as intended. The intricate webs of rules and approvals, often designed to ensure fairness or safety, frequently become tools for obstruction or delay. Only the most heavily invested or ideologically fervent stakeholders typically possess the time, financial resources, and unwavering interest necessary to navigate and dominate these procedural battles. Consider the ubiquitous example of local zoning hearings: homeowners, understandably protective of their immediate environment and property values, are almost always overrepresented. Conspicuously absent are the would-be future residents—the young families, the essential workers, the diverse new voices—who would arguably testify in favor of increased density, more housing options, and, by extension, potentially more affordable living. Their diffuse, future interest cannot compete with concentrated, present concerns in a procedurally heavy system.
Furthermore, governmental authorities themselves may not possess the requisite time, money, sustained interest, or specialized expertise to effectively oversee a complex, multi-layered procedural system. The ambition to regulate often outstrips the capacity to implement. California’s struggle to fully operationalize the California Privacy Protection Agency, despite the groundbreaking nature of the underlying legislation, serves as a salient, if cautionary, example. The intent was clear and laudable; the procedural and administrative execution, however, has proven immensely challenging, highlighting the chasm that can exist between regulatory aspiration and operational reality.
Second, even when a procedural system functions as planned, more procedure does not necessarily align with, or guarantee, the proffered intended outcome. The aggregate effect of numerous procedural hurdles can, counterintuitively, detract from the likelihood of achieving the very aim the procedures were designed to support. The journey becomes so arduous that the destination is forgotten or becomes unreachable. For instance, while environmental reviews are crucial, the sheer volume of time, money, and expertise poured into generating exhaustive environmental impact statements—which, under the National Environmental Policy Act (NEPA), average a staggering three years to complete for certain projects, with many taking much longer—can significantly limit the availability of those same resources when it comes to actually building the sustainable housing or green infrastructure projects the reports ostensibly facilitate. The process, in a sense, consumes the product.
Finally, and perhaps most critically for a rapidly evolving field like AI, more procedure inherently means more time, more unpredictability as to the final outcome, and significantly increased legal and operational burdens. This trifecta can, and often does, deter a would-be applicant from even initiating a covered project. By way of example, housing developers, facing byzantine approval processes in one jurisdiction, may simply choose to build housing elsewhere, in a regulatory environment that offers greater certainty and speed, even if the initial location held more promise. The chilling effect of procedural complexity is a powerful, if often unappreciated, force.
These dynamics are not abstract concerns; they are beginning to manifest with alarming clarity in the nascent field of AI regulation. Consider a sample of emerging requirements: the RAISE Act, pending before the New York state legislature, proposes audit mandates; the Colorado AI Act, signed into law last year, imposes extensive reporting obligations; and Vermont previously weighed legislation that would set up an entirely new governmental body dedicated to AI oversight. Each of these, viewed in isolation, might seem like an innocuous, even prudent, step. Viewed through the lens of Abundance, however, it’s time to adopt a strong presumption of procedure-induced paralysis: a condition in which the cumulative weight of procedural requirements, compliance burdens, and oversight mechanisms effectively stifles innovation, discourages beneficial applications, and slows the overall progress of AI development to a crawl.
In practice, procedure-induced paralysis means that AI developers, particularly smaller startups or those focused on public interest applications, could face a landscape so daunting that they either abandon projects, relocate to more favorable regulatory climates, or are acquired by larger entities better equipped to handle the compliance overhead. The likely net result if such a patchwork of state-level procedural mazes is enacted is not difficult to predict: fewer AI companies, particularly in jurisdictions with the most cumbersome rules; a chilling effect on the development of AI tools designed for social good, which often operate on tighter margins and with fewer resources for navigating complex legal terrain; and a multitude of missed opportunities—innovations never pursued, problems never solved, efficiencies never realized—the full cost of which will remain tragically unknown.
So, what is the way forward? First, there must be a foundational recognition of the long-term, overwhelmingly positive benefits that AI can offer if it develops within a smart, effective, and efficient regulatory ecosystem. This doesn’t mean a laissez-faire free-for-all, but rather a regulatory approach that is as innovative and adaptive as the technology it seeks to govern.
Second, we must champion outcome-oriented regulation over prescriptive proceduralism. Instead of dictating the minutiae of how AI systems should be designed, audited, or reported on, regulatory frameworks should focus on defining clear, measurable outcomes related to safety, fairness, transparency, and efficacy. This allows for innovation in how those outcomes are achieved, rather than locking developers into potentially outdated or suboptimal processes. Performance standards, risk-based assessments, and sandboxes for experimentation can be far more effective than rigid, process-heavy mandates.
Third, there is a pressing need for greater harmonization and federal leadership to preempt a chaotic patchwork of fifty different state AI regulatory regimes. While states can be valuable laboratories of democracy, the inherently borderless nature of digital technology and AI development means that a fragmented regulatory landscape will inevitably lead to confusion, increased compliance costs, and a race to the regulatory bottom or, conversely, a flight of innovation from over-regulated states. A baseline federal framework, establishing core principles and potentially preempting conflicting state laws in areas of overriding national interest, would provide greater certainty and foster a more competitive national AI ecosystem.
The debate over AI governance is not merely a technical or legal squabble; it is a contest of visions for the future. The insights of Abundance, forged in the frustrating realities of other sectors crippled by procedural inertia, offer a vital corrective. By prioritizing tangible outcomes over procedural purity, and by fostering an environment where innovation can flourish responsibly, we can steer AI towards a future of broadly shared benefits, rather than allowing it to become ensnared in the very regulatory thickets we know can paralyze progress. The first test for Abundance in the age of AI is here; the stakes could not be higher.
Kevin Frazier is an AI Innovation and Law Fellow at Texas Law.