A Symposium for AI Skeptics, AI Believers, and Everyone in Between
We promised you that this symposium would be for AI skeptics, AI believers, and everyone in between, and we hope that you will agree that we delivered.
There are plenty of good use cases for AI in government decisionmaking, but sometimes we need to say no. Right now, saying no seems harder than it should be. AI systems are truly remarkable, but they are not capable of making values-laden policy decisions. We kid ourselves if we think that a “human in the loop” is more than an impoverished way to think about what agencies owe the public. We can likely make great progress in regulatory policy by letting algorithms into our loop, not the other way around.
This post is the eleventh contribution to Notice & Comment’s symposium on AI and the APA. For other posts in the series, click here. The questions animating this symposium—how administrative law should adapt to the rise of artificial intelligence—are hardly confined to the United States. The United Kingdom, like many other jurisdictions, is grappling with the same […]
To the extent that § 553 attempts to create a process that can improve the substance of regulations, it does so with specifically human intelligence in mind. If agencies over-rely on AI to carry out these procedural tasks, they threaten to undermine § 553’s goal of improving the quality of final rules.
Federal agencies are rapidly expanding their use of artificial intelligence (AI) in government procurement. Much of the public discussion has centered on relatively narrow applications, such as tools that support market research or flag outdated contract clauses. When used to summarize or organize procurement-related information, these tools may pose manageable risks. More complex challenges arise when they extend into discretionary functions, including core evaluative tasks, that federal procurement doctrine presumes a human decision-maker will perform.
As these federal efforts get underway, agencies in D.C. can draw on the successes of their counterparts in Richmond. Though federal regulations and state regulations differ in certain important respects, there are substantial similarities. Here are some of the possible components of a federal AI-empowered regulatory modernization initiative.
Can chain-of-thought prompt engineering solve the black box problem?
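For readers unfamiliar with the technique that question names, chain-of-thought prompting asks a model to narrate intermediate reasoning before stating its answer, producing a readable rationale even though the model's internals remain opaque. Below is a minimal sketch in Python; the `generate()` helper and the regulatory question are invented for illustration and do not reflect any particular post's argument or any agency tool.

```python
# A minimal sketch of chain-of-thought prompting. `generate` is a
# hypothetical stand-in for a call to whatever large language model
# API an agency might use.

def generate(prompt: str) -> str:
    """Hypothetical placeholder: in practice, send `prompt` to an LLM."""
    return "(model output would appear here)"

QUESTION = (
    "Should the proposed emissions threshold apply to facilities "
    "built before 1990? Answer yes or no."
)

# Direct prompt: the model returns only a conclusion -- the classic
# black box.
direct_answer = generate(QUESTION)

# Chain-of-thought prompt: the model is asked to narrate intermediate
# steps, yielding a human-readable rationale alongside its answer.
cot_answer = generate(
    QUESTION
    + "\nThink step by step: identify the relevant factors, weigh them, "
      "and only then state your conclusion."
)

print(direct_answer)
print(cot_answer)

# Caveat: the narrated steps are themselves generated text, not a trace
# of the model's internal computation, so they can rationalize rather
# than explain -- one reason chain-of-thought alone may not "solve" the
# black box problem.
```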
There’s a better way to develop administrative common law for these changing times: Rather than accept that this law is something that courts make through an exercise of judicial creativity and policy balancing, we should instead embrace an administrative common law that courts find in prevailing practices.
This post is the fifth contribution to Notice & Comment’s symposium on AI and the APA. For other posts in the series, click here. Among the operative principles of administrative law is the requirement that agencies “examine the relevant data and articulate a satisfactory explanation” for their actions, commonly known as the reasoned decisionmaking requirement. What this […]
Nothing in the APA prohibits agencies from using computational tools to gather, synthesize, or even recommend policy choices—and agencies already often rely on modeling tools to inform regulatory standards. What the APA requires is that the final rule itself be the product of reasoned judgment—supported by evidence, responsive to significant comments, and explained in a coherent manner. If those conditions are satisfied on the face of the rule, it is unlikely that a court reviewing a challenge to the rule will examine how AI was used in the decisionmaking process, at least absent some external reason to suspect overreliance on AI. If, on the other hand, the rule ignores key evidence, fails to address major concerns or alternatives, or offers inconsistent reasoning, it will be struck down as arbitrary regardless of whether AI was used in its development.
This post is the third contribution to Notice & Comment’s symposium on AI and the APA. For other posts in the series, click here. Much of the emerging thinking about the relationship between administrative law and generative artificial intelligence is premised, expressly or implicitly, on the assumption that AI systems might come to play a leading role […]
Agencies will not be able to rely solely on today’s most ubiquitous forms of AI—namely, those based on ChatGPT and similar large language models—to avoid their obligation under the APA’s arbitrary and capricious standard to understand the problems they seek to solve, assess alternative solutions against legally relevant criteria, and make some kind of forecast about how these alternatives would change outcomes in the world. Administrators’ forecasts need to be about tangible outcomes, not about plausible-sounding words in sentences, however confidently they might be expressed.
This symposium is for AI skeptics, AI believers, and everyone in between.