Agencies, Not Courts, Should Develop Administrative Common Law for AI, by Adam Crews
This post is the sixth contribution to Notice & Comment’s symposium on AI and the APA. For other posts in the series, click here.
Although the Administrative Procedure Act (APA) has been around for nearly 80 years, much of administrative law doctrine comes from the courts rather than the statute. For decades, scholars have recognized the prevalence of this “administrative common law,” and many embrace it. After all, the APA was written to address matters that were salient in 1946, so the statute says relatively little about issues like rulemaking, which gained prominence several decades later. Much of today’s administrative common law—including various judicial glosses on APA § 553’s rulemaking requirements—emerged as courts tried to solve the perceived problem that rulemaking was otherwise too prone to abuse and regulatory capture. But administrative common law doesn’t need to come top-down from the courts; it can (and perhaps should) emerge from agency practices rather than judicial creativity. Artificial intelligence’s arrival in the administrative state presents the opportunity to make that shift.
On its face, the APA is not well equipped to handle AI. We can stare at the statutory text all day and it won’t give us a clear answer to some of the questions that might come up in litigation: May agencies prompt a model to review a record and arrive at a predetermined outcome, or does that violate a supposed obligation to approach rulemaking with an open mind? Or, do agencies need to disclose details about their algorithms (and their use) as the Portland Cement doctrine requires them to disclose certain facts on which they rely in promulgating a rule? Thoughtful commentators have already flagged questions like these as among those looming in the new, AI-assisted world of administration.
If litigants ultimately raise those questions in court, they will likely fall back on existing judge-made administrative common law for support. Many of these doctrines police not only the substantive reasonableness of agency actions but also the procedures followed, like whether agencies disclosed enough information and responded to enough comments to make public participation “meaningful.” It’s easy enough to take those existing precedents, distill some general principles, and then ask courts to use them to impose new, judge-made guardrails around AI’s use in rulemaking.
That approach is precarious, however, for a purely pragmatic reason: The prevailing judicial methodology today is simply different from what it was in administrative common law’s heyday. The canonical circuit court cases that gave us procedural administrative common law in the 1960s and 1970s were based on a view of courts as partners in the administrative process, charged with furthering supposed congressional intent and protecting individual procedural rights. But we’re all textualists now, and even progressive scholars have pointed out that many of our judge-made procedural doctrines are hard to square with the APA’s text. That’s especially true once you consider Vermont Yankee’s rule of deference to agency procedural choices, which the current Supreme Court is happy to deploy broadly. As a result, any effort to extend old procedural precedents is bound to invite the question: But where does the statute say that? And if you don’t have an answer, you’re likely to run into trouble—just ask the open-mindedness test, which the Supreme Court breezily dismissed as not required by the APA and therefore inconsistent with Vermont Yankee.
It’s not hard to see how this change in thinking about the law could doom arguments for extending administrative common law’s procedural precedents, or even destroy them altogether. Let’s take Portland Cement as an example. As first articulated by the D.C. Circuit in 1973, the doctrine holds that it is “not consonant with the purpose of a rule-making proceeding” to promulgate rules based on data “known only to the agency.” As elaborated in ensuing decades, Portland Cement now requires advance disclosure of any “critical factual material,” such as “staff reports” and “technical studies” in the agency’s possession and on which it relies. The big idea is that the public should have the opportunity to engage in a “genuine interchange” with the agency about the accuracy of these materials prior to the rule’s promulgation.
From a purely doctrinal perspective, one might plausibly situate some uses of AI within the Portland Cement doctrine. Using an AI tool in rulemaking to identify significant public comments that require a response—and perhaps even to propose suggested responses—would likely not rise to the level of “critical factual material” uniquely in the agency’s possession. After all, public comments are already out in the open, and the D.C. Circuit has rejected arguments that Portland Cement and its progeny require “transparent explanations” about the agency’s behind-the-scenes work with the rulemaking record. By contrast, asking AI to serve on the front end as (essentially) an automated agency official and generate the equivalent of a “staff report”—making forecasts, or suggesting grounds for new regulation—is plausibly within Portland Cement’s reach, at least as to the report’s substance. But if litigants go further, perhaps arguing that the tool’s inputs or prompts are part of the information with which they should have the opportunity to quibble during the comment period, we quickly find ourselves extending Portland Cement to a new frontier.
Every new extension of Portland Cement, however, is a new opportunity to bring the whole thing toppling down. Indeed, Justice (but then-Judge) Kavanaugh has already opined that “the Portland Cement doctrine cannot be squared with the text of … the APA,” nor with the “landmark Vermont Yankee decision.” The more that litigants press on the doctrine, the more likely the Supreme Court might intervene and take it away altogether.
Although I’m skeptical that doctrines like Portland Cement are the solution, I still think that it’s possible to arrive at sensible rules for AI transparency during rulemaking. I just don’t believe that judges—who, frankly, are unlikely to understand this stuff better than anyone else—should fashion this next wave of administrative common law by their own lights. And in a forthcoming paper, I argue that there’s a better way to develop administrative common law for these changing times: Rather than accept that this law is something that courts make through an exercise of judicial creativity and policy balancing, we should instead embrace an administrative common law that courts find in prevailing practices.
What do I mean by this? For centuries, a defining feature of common law judging was ascribing legal effect to customary practices that emerged on their own among the regulated community. A “custom” is essentially a pattern of behavior that is self-perpetuating and valued for its own sake. Where a particular community does business a particular way out of a sense of normative obligation (separate from a posited legal command), that community has created a self-regulating customary practice. And when judges can identify those customs, there is often some sense in applying them as the benchmark against which legality is measured. Widespread adoption is some evidence of popular consent (suggesting democratic legitimacy), and one might think that customs emerge from a place of collective wisdom (suggesting the right balance between competing demands). And as I argue in the paper, there are reasons to think that the APA affirmatively embraces this custom-based approach to administrative common law: APA § 559 preserves “additional requirements … recognized by law,” and ascribing legal effect to custom is a traditional way in which the law recognized legal requirements, including when construing statutes imposing procedural obligations on the government. In short, one plausible way to understand the APA is as accepting administrative common law from the bottom up (as agencies coalesce around a set of best practices) while rejecting it from the top down (as when the D.C. Circuit imposed Portland Cement by judicial decree).
There are already some ways in which administrative common law reflects customary practices. Consider the length of a public comment period. APA § 553 does not hand down a bright-line rule, but the Administrative Conference of the United States (ACUS) has long recommended nothing shorter than 30 days, which is now something of the customary norm. Courts followed suit, pointing to ACUS’s guidance as a benchmark for evaluating the sufficiency of a comment period. And so today, a fair statement of the common law rule that courts enforce is that 30 days is generally the shortest comment period allowed. But that rule is grounded in customary practice, not judicial policy balancing or elaborate guesswork about what Congress would have wanted (but failed to say) back in 1946.
We can do the same thing for transparency around AI in rulemaking. ACUS is already hard at work studying current and potential uses of AI in administrative processes. In time, we might expect to see a set of customary practices arise for AI disclosure and related matters, informed in large part by a careful study of the issues, deliberative weighing of costs and benefits, and persuasive recommendations for best practices. Once we have a custom, that should be the baseline against which we measure the sufficiency of a particular agency’s AI disclosure. In the meantime, courts should sit back and give agencies some leeway to figure this stuff out on their own, rather than shoehorn a new problem into an old made-up doctrine.
Adam Crews is Assistant Professor at Rutgers Law School.