A Symposium for AI Skeptics, AI Believers, and Everyone in Between
This post is the final contribution to Notice & Comment’s symposium on AI and the APA. For other posts in the series, click here.
We promised you that this symposium would be for AI skeptics, AI believers, and everyone in between, and we hope that you will agree that we delivered.
Our symposium confirms that, while the speculative phase of government use of AI is over, the full range of AI applications is still coming into view. As we noted in our introductory piece, government agencies are using AI and other algorithmic techniques in adjudicatory and regulatory decisionmaking. In this symposium, we heard from Reeve Bull on the use of AI for deregulation in Virginia and from Joe Tomlinson & Brandan McGurk on the UK government’s use of AI. The Bull and Tomlinson & McGurk pieces are especially interesting to read together because Bull gives us a glimpse of a future in which agencies enthusiastically integrate these tools into their workflows, while Tomlinson & McGurk gesture at the fallout from a staunchly “pro-innovation” approach. This same push-and-pull is a theme throughout the symposium, with many essays grappling with the potential of AI—alongside its challenges.
Another conclusion of this symposium is that the prospect of APA litigation hangs over these developments, and will likely continue to do so for many years. Many essays in this symposium argue that various APA provisions and associated doctrine constrain agency use of AI in government decisionmaking. One of us (Ascher) & John Lewis wrote that the APA's bar on arbitrary and capricious agency action, its reasoned decisionmaking requirement, its requirement that agencies meaningfully consider public comments, and its disqualification of decisionmakers with unalterably closed minds can serve as meaningful guardrails on the government's use of AI. In this way, existing administrative law doctrine compels substantive human involvement as a minimum requirement for government use of AI.
In a related point, Cary Coglianese reminds us that there are many ways for AI to be helpful in the rulemaking process, but he warns agencies about relying on AI-generated "bluster" to make policy choices. Such reliance will not suffice, he argues, under the APA's prohibition on arbitrary and capricious agency action. He raises the idea that "[t]hese models can respond just as matter-of-factly to policy-related questions that they are not capable of answering." In some contexts, AI's capability is a technical issue, meaning that it's possible to push beyond current abilities with additional development. But there are also decisions where, even though these models can produce words that look a lot like decisions, they are not actually capable of making the values-driven trade-offs a decision requires. Tara Aida emphasizes that the "burdensome" and "detailed" nature of regulatory work—the characteristic that makes it a prime target for AI streamlining—is meant to ensure that rulemaking is an intellectually rigorous practice.
Drawing on both the potential and the challenges of using AI in government procurement, Jessica Tillipman warns agencies not to adopt tools that are "opaque, technically complex, or contractually shielded" because they will produce output without the reasons the APA requires under State Farm and the Federal Acquisition Regulation. Instead, Tillipman advises agencies to insist on AI tools whose outputs meaningfully preserve, and do not supplant, the agency's ability to make a decision.
In a more general sense, Gilbert Orbea & Emily Froude offer that courts should apply more scrutiny to agency AI use “when agencies use it in implementing broad statutory mandates, substantively drafting or producing analysis, or regulating rights- or safety-impacting domains,” and vice versa. One of us (Dooling) argued that AI-drafted justifications for regulations cannot be lawful, because they breach the decisionmaker’s duty to engage in reasoned decisionmaking.
One of many lingering questions is how we will know if agencies are using AI to draft their regulations or other final actions. For Jack Jones & Burçin Ünel, the answer is that we probably won't. They remind us that if the APA's requirements for reasoned decisionmaking "are satisfied on the face of the rule, it is unlikely that a court reviewing a challenge to the rule will examine how AI was used in the decisionmaking process, perhaps absent some external reason to suspect overreliance on AI." For Jones & Ünel, the key is ensuring that agency staff are savvy with the tools they are using and that "AI remains an analytical assistant with human oversight, not an autonomous decisionmaker." These are matters of internal agency practice, though, which are likely to be fluid and subject to different approaches, including resistance, experimentation, and enthusiastic embrace.
Jones & Ünel also write that AI’s inscrutability is in tension with requirements for reasoned decisionmaking. Elliot E.C. Ping assesses the potential of chain-of-thought prompting, which directs an AI system to attempt to demonstrate the intermediate steps in its process, to solve this inscrutability (or “black box”) problem. (If we lost you here, check out Ping’s piece for a plain language explanation.) Ping surveys the results to date and finds that chain-of-thought prompting is an interesting idea that is “still in its infancy and likely won’t be the hero we need to resolve the AI explainability crisis.” Alas—though, as anyone who spends time prompting large language models knows, she is correct.
Turning to the APA’s procedural requirements, Aida points out that § 553 was imposed on agencies with the expectation that people would implement it. She finds evidence for this in the APA’s pre-AI enactment, but also in the APA’s requirement for a reflective notice-and-comment rulemaking process. Such reflection and reason-giving for adopting or rejecting ideas is quintessentially human, Aida argues. Letting AI lead such a process would result in “quite superficial” engagement with the process steps of § 553. In a similar vein, one of us (Dooling) has written about “writing as thinking” and the risks of delegating writing tasks. And Coglianese referenced Sierra Club v. Costle, 657 F.2d 298 (D.C. Cir. 1981), to point out that the “ultimate responsibility for the policy decision remains with the agency rather than the computer.”
Maybe it isn’t surprising that a symposium entitled AI and the APA generated a lot of ideas about how the APA and related administrative law doctrines might be read to impose legal constraints on agency AI use. To a hammer, everything is a nail; to a lawyer, every open question is a potential legal claim. But one theme that emerged even amid that creative thinking is uncertainty about the extent to which the APA can, should, or will be adapted to this convulsive technological change. For instance, Jones & Ünel pointed out that judicial review generally operates “on the face of the rule”—which might keep under-the-hood AI usage out of courts’ view. Similarly, Coglianese suggests that AI usage will principally be legally relevant where it degrades the actual quality of an agency’s work. But this symposium certainly did not reach consensus on whether the use of AI is itself a legal issue.
In that vein, Adam Crews provides a dose of legal realism, warning us of overreliance on existing administrative common law to navigate the challenges of AI in government decisionmaking. For example, an advocate attempting to use Portland Cement v. Ruckelshaus, 486 F.2d 375 (D.C. Cir. 1973), to require an agency to disclose its AI usage, as one of us (Ascher) has written about, might instead receive a decision that overturns Portland Cement. He argues that courts have changed significantly since the heyday of administrative common law in the 1960s-70s, when courts saw themselves as “partners in the administrative process.” Instead, Crews thinks that today’s textualist courts will simply ask: “But where does the statute say that?” Well, that’s depressing. How might one challenge risky uses of AI without jeopardizing administrative common law? Crews argues that we should encourage courts to conceive of themselves as locating and supporting emergent administrative practices by giving agencies time to try different approaches and settle into a set of customs for AI usage. But what happens in the meantime, and what if problematic norms emerge from an already-stretched-thin civil service?
Tomlinson & McGurk’s essay suggests that a kind of iterative approach is already happening in the UK. They map the UK’s evolution from “a hands-off, ‘pro-innovation’ strategy—explicitly eschewing regulation of governmental AI use” to the green shoots of legal policy that recognizes the risks of unchecked innovation.
One point that nearly every piece was quick to admit is that AI is undeniably becoming a very useful technology. That is, while the “believers” and the “skeptics” may take different postures towards AI, with different views about what uses are appropriate and what best practices ought to prevail, there is a growing consensus that AI has—and certainly will have—its place in the administrative process. At one end of the spectrum, Bull takes a (forgive us) bullish view of LLMs’ capacity to draft complex, inter-textual legal analyses that can streamline regulatory review and reform. At the other end, one of us (Dooling) insists that both democratic and cognitive processes require that agency analysis and reasoning—not just ultimate decisionmaking—must remain the province of the human mind. A mere “human in the loop,” a paradigm that several other pieces identify as an essential element of agency AI usage, is “an impoverished way to think about what agencies owe the public.” Even so, Dooling sees “plenty of good use cases for AI in government decisionmaking”—so long as we “let[] algorithms into our loop, not the other way around.”
There are so many more open questions. Is the underlying assumption of administrative law, that humans are the ones making the decisions and giving the reasons, itself a legal requirement? Do agencies need to explain how their personnel collectively reasoned through a policy problem, or are facially plausible justifications for a particular action sufficient? Even assuming that administrative law governs how agencies use AI, how will courts assess compliance in light of the various rules—including the presumption of regularity and the principle that judicial review of agency action is based on the administrative record—that limit the extent to which courts can peer behind the veil to appraise the adequacy of the agency's procedures? Will agencies be forthcoming about their use of AI, and if not, are there means by which the public may force such disclosure? Will LLMs—advanced and fluent as they have become in certain respects—really be up to the complicated task of writing rules? On some level, it seems entirely right that the administrative law community will have to muddle through these developments, just as we have so many others.
Bridget C.E. Dooling is Assistant Professor of Law at The Ohio State University and Jordan Ascher is Policy Counsel at Governing for Impact.

