Notice & Comment

Artificial Intelligence, Modernizing Regulatory Review, and the Duty to Respond to Public Comments, by Eli Nachmany

This post is part of a symposium on Modernizing Regulatory Review. For other posts in the series, click here.

President Joe Biden’s recent Executive Order on Modernizing Regulatory Review makes explicit mention of the potential for artificial intelligence (AI) to impact the public comment process in agency notice-and-comment rulemaking. But the Biden Administration needs to be careful how it implements the executive order—certain government uses of AI in the public comment process could undermine the goals of the Administrative Procedure Act (APA) and frustrate judicial review of agency action.

This essay picks up on a thread from a terrific podcast—starting around the 26:48 mark—that Alan Rozenshtein did with Mark Febrizio and Bridget Dooling for Lawfare on “Robotic Rulemaking.” (Febrizio and Dooling wrote a fascinating essay on the topic for Brookings, too.) In the podcast, Dooling contended that the use of AI could displace true agency consideration of public comments.

In Section 2(d), the Executive Order provides as follows: “The Administrator of [the White House Office of Information and Regulatory Affairs (OIRA)], in consultation with relevant agencies, as appropriate, shall consider guidance or tools to modernize the notice-and-comment process, including through technological changes. These reforms may include guidance or tools to address mass comments, computer-generated comments (such as those generated through artificial intelligence), and falsely attributed comments.”

This provision appears to follow from the Administrative Conference of the United States’ Recommendation 2021-1, entitled “Managing Mass, Computer-Generated, and Falsely Attributed Comments.” Over the last decade or so, mass comment campaigns have somewhat gummed up the works for agencies; the executive order opens the door for OIRA to leverage technology to contend with the explosion of public comments that the digital age has facilitated.

The question of how exactly the White House will (and can) address this novel issue is worth studying in further depth, given the prevailing legal framework. Perhaps the White House will resolve to fight fire with fire and issue guidance that formally incorporates AI not only into sorting comments but also into responding to them. But if agencies outsource the tasks of responding to comments and generating the administrative record to AI, they might come into conflict with the requirements of the APA—at least as the Supreme Court has interpreted those requirements.

The Duty to Respond to Public Comments

In Perez v. Mortgage Bankers Association, the Supreme Court—interpreting the APA—remarked that “[a]n agency must consider and respond to significant comments received during the period for public comment.” (Bracket whether that requirement comports with the Court’s decision in Vermont Yankee v. NRDC, which stands for the proposition that federal courts may not impose procedures on agencies in excess of the procedural requirements set forth in the APA.) Thus, the current state of law appears to be that a court must set aside a rule as procedurally invalid if the agency does not adequately consider and respond to significant public comments. Against the backdrop of arbitrary-and-capricious review of agency action, federal courts are looking to “ensur[e] that the agency has … reasonably considered the relevant issues and reasonably explained the decision,” as the Court explained the APA’s mandate in Prometheus Radio Project v. FCC.

Summarizing circuit court cases, Donald Kochan surmises that an agency’s response to significant comments must “be coherent and substantive,” and he notes that these responses facilitate judicial review—toward the end of determining “what major issues of policy were ventilated … and why the agency reacted to them as it did.” At bottom, Kochan argues, “[t]he commenting process and the duty to respond impose discipline on agencies, ensuring that they do not miss analyzing important issues.”

The APA’s requirement of notice and comment in the rulemaking process assumes that public comments will sharpen an agency’s consideration of the relevant issues before the agency finalizes a regulation. Whether the modern practice of notice-and-comment rulemaking embodies that ideal is uncertain. Jonathan Weinberg has opined that “the government’s obligation to hear, engage, and respond [to public comments] … puts governors and governed in a discursive relationship. It compels the state to engage in communicative, reason-based discourse rather than the mere exercise of power.” Weinberg lauds “[t]he government’s obligation to show respect” and “treat commenters as democratic citizens rather than as objects of paternalistic control” as being “at the heart of the right to be taken seriously and its democratic bona fides.” Yet Weinberg laments that agencies in the modern day “are often swamped by comments and pay serious attention to only some of them. They attend to those comments filed by repeat players with instrumental power and may send the rest off to outside contractors to be ignored.”

A Hypothetical to Illustrate the Issue

Now, imagine if an agency employed the ultimate outside contractor—an AI language model—to synthesize all of the public comments on a given proposed rule and write up an automated response to those comments for the administrative record. Consider a hypothetical: The Environmental Protection Agency (“EPA”) proposes, via a notice of proposed rulemaking, to promulgate Regulation X (setting an emissions standard) pursuant to its authority under Statute Y to set an emissions standard that is “appropriate to protect public health and safeguard environmental quality.” The notice includes a detailed cost-benefit analysis. The agency sets a 60-day period for public comment. Comments flood in from all corners, including grassroots organizations, industry stakeholders, and everyday Americans. A not-insignificant number of these comments are duplicates (part of some mass comment campaign that an advocacy group organizes), and some of the comments are themselves AI-generated.

Still, the commenters raise various concerns. The grassroots organizations question whether the agency adequately took certain intangible benefits of public health—like increased human happiness—into account. These groups want a more stringent emissions standard. Industry stakeholders ask for a more lenient emissions standard, pointing out that the agency’s cost estimates in its cost-benefit analysis were simply too low for a variety of reasons. Everyday citizens raise all sorts of other issues, from economic anxieties to fear of climate change.

Faced with this barrage of comments, the EPA merely plugs them all into an AI prompt alongside the proposed rule and gives the following command: “Defend the proposed rule against all of the most significant contentions raised in these comments. Make the best arguments why the proposed rule should not change.” The AI model dutifully spits out a long narrative about why the rule is appropriate—the narrative is mealy-mouthed in certain places, and it is frustratingly general in others, but it ultimately rests on the vague nature of the discretionary “appropriate” standard to justify the rule. (As one article has described ChatGPT’s performance on a sample tort law exam: “[I]n response to a policy question … asking for law-and-economics critiques of tort cases, ChatGPT merely described cases at a high level of generality in ways that superficially mentioned economics but did not engage with prominent law-and-economics concepts, like shifting liability to the least-cost avoider or spreading losses to limit concentrated risk.”) An EPA official reads over the narrative for glaring errors—finding none, the official enters the narrative unedited into the final rule, which the agency publishes without any modification to the initially proposed rule. No one from the EPA ever actually reads the comments that agency staff aggregated for the AI language model.
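
To make the hypothetical concrete, here is a minimal sketch of what such a fully outsourced workflow might look like. Everything in it is an assumption for illustration: the generate function stands in for whatever text-generation API an agency might call, and no real agency system or vendor product is being described.

```python
# Hypothetical sketch of the EPA scenario above: every name here is a
# stand-in for illustration, not a real agency system or vendor API.

def generate(prompt: str) -> str:
    """Placeholder for a call to some language-model API (assumed, not real)."""
    raise NotImplementedError("Swap in an actual text-generation client.")

def draft_comment_response(proposed_rule: str, comments: list[str]) -> str:
    """Feed every comment, unread by any human, into a single defensive prompt."""
    joined = "\n\n".join(comments)
    prompt = (
        "Defend the proposed rule against all of the most significant "
        "contentions raised in these comments. Make the best arguments "
        "why the proposed rule should not change.\n\n"
        f"PROPOSED RULE:\n{proposed_rule}\n\nCOMMENTS:\n{joined}"
    )
    return generate(prompt)

# In the hypothetical, an official merely skims the output for glaring errors
# and pastes it, unedited, into the final rule.
```

The point of the sketch is how little human judgment the pipeline contains: the only decision a person makes is whether the output looks superficially plausible.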

Rulemaking and Reason-Giving

Has the agency in the hypothetical sufficiently considered public comments and responded to them? The record would reflect such consideration and include detailed—if general—responses to all of the most significant comments (assuming AI is even capable of determining which comments are “significant”—not the most straightforward task). But one would be hard-pressed to conclude that the agency truly engaged with the comments and thought seriously about how the objections raised therein might counsel revision of the proposed rule.

Although the AI model was able to spit out a responsive wall of text, no one at the agency actually read the comments and grappled with their arguments while drafting the final rule. The record responds to the comments, but the agency—more specifically, a human being at the agency—has not actually analyzed the issues that the comments raise. A reviewing court could not be certain that the agency gave adequate consideration to the comments, despite a record that appears to say it did.

The strongest counterargument is this: Someone at the agency ratified the AI model’s report. AI can pump out a narrative, but someone—some human being—at the agency has to be the one to hit “send” on the final rule. Even if that person did not draft the record, the argument goes, the AI write-up is only in the final rule because a person deemed the language model’s output satisfactory. But on the Lawfare podcast, Dooling contended (at 27:42) that “the writing is the thinking,” analogizing these concerns to possible issues with the use of contractors in rulemaking. In Dooling’s words (at 29:10), “it is a little bit of a fallacy that, at the last day, when the head of the agency signs the rule, that that’s when the decision got made. The decisions often get made well upstream from that in the drafting process.”

A Theory of Equilibrium

In 2011, Orin Kerr published an article in the Harvard Law Review entitled “An Equilibrium-Adjustment Theory of the Fourth Amendment.” Kerr’s main thesis is that the development of Fourth Amendment law has been an exercise in equilibrium-adjustment—“a judicial response to changing technology and social practice.” As Kerr writes: “When new tools and new practices threaten to expand or contract police power in a significant way, courts adjust the level of Fourth Amendment protection to try to restore the prior equilibrium.”

Maybe judicial permission of the use of AI to respond to comments would be an exercise in APA equilibrium-adjustment. The public is likely to start turning to AI to create voluminous and sophisticated comments. Perhaps the government should be able to turn to AI as well to respond to these comments in a more time-efficient manner. Yet the equilibrium-adjustment theory is an uncomfortable fit in this context. Public comments are not merely a time-consuming nuisance to be managed. The goal of the Fourth Amendment may be to balance privacy with law enforcement, and these two values are certainly often in conflict. But the goal of the public comment process is not to strike some sort of balance—it is to improve agency decisionmaking by sparking deeper consideration of the issues that the proposed rule seeks to address. Any reforms, including the use of AI, should have this goal in mind.

To that end, AI may well have a place in the government’s approach to the modern public comment process: for example, sorting through mass comment campaigns and summarizing those comments for agency officials seems to be a defensible resource-allocation decision that would free agency staff to actually consider comments, as the sketch below illustrates. The Biden Administration—to the extent AI usage is one of the “technological changes” that the White House is considering “to modernize the notice-and-comment process”—should be careful to distinguish between optimizing resource allocation and eschewing careful consideration of comments. The former is a worthwhile use of a new technology toward APA-consistent ends; the latter would appear to undermine the essence of the APA’s notice-and-comment scheme.
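
By contrast with the earlier sketch, here is a minimal sketch of that resource-allocation use: collapsing a mass comment campaign into its distinct submissions so that officials read each unique argument once. The normalization rule is an assumption for illustration; real agency tooling would need something more robust (near-duplicate detection, attribution checks, and so on).

```python
# Minimal sketch (assumed, illustrative): triage duplicate form-letter comments
# so that human reviewers see each distinct comment once, with its count.
from collections import Counter
import re

def normalize(comment: str) -> str:
    """Collapse whitespace and case so form-letter duplicates group together."""
    return re.sub(r"\s+", " ", comment).strip().lower()

def triage(comments: list[str]) -> list[tuple[str, int]]:
    """Return unique comment texts with their counts, most-duplicated first."""
    return Counter(normalize(c) for c in comments).most_common()

# Example docket: a three-copy form-letter campaign plus one original comment.
docket = [
    "Please strengthen the standard.",
    "please strengthen   the standard.",
    "Please strengthen the standard.",
    "The cost estimates understate compliance costs for small refiners.",
]
for text, count in triage(docket):
    print(f"{count}x: {text}")
```

Used this way, the technology routes comments to human readers rather than replacing them, which is precisely the distinction the essay draws.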

Final Thoughts

We have entered uncharted waters, and our prevailing legal frameworks are doing their best to provide direction. The executive order calls on the OIRA administrator to “consider guidance or tools to modernize the notice-and-comment process, including through technological changes.” One technological change that could modernize the process is the incorporation of AI into responding to public comments. But using AI in this way might contravene the requirement that agencies consider significant public comments before publishing final rules. In the ideal APA universe, someone at the agency reads a well-reasoned comment, struggles with its arguments while attempting to write a response, suggests a modification to the proposed rule in response to the comment, and secures a change to the rule before the agency finalizes the regulation. Perhaps that ideal does not reflect modern practice, but it nevertheless highlights the potentially acute dangers of outsourcing comment response to AI.

Eli Nachmany is a law clerk to Judge Steven J. Menashi of the U.S. Court of Appeals for the Second Circuit. This essay represents the views of the author alone.
