Seeking Disclosure of AI Usage in Agency Rulemaking, by Jordan Ascher
The Trump Administration has shown great interest in using artificial intelligence tools in governance. It has reportedly used AI to, among other things, evaluate federal workers’ responses to the government-wide “Fork in the Road” email, “munch” agency contracts it viewed as nonessential, and assist General Services Administration staff—and it plans to “accelerate” AI usage across the government. That could include using AI to aid in the rulemaking process. Indeed, shortly after the 2024 election, Elon Musk and Vivek Ramaswamy proposed using AI to identify regulations for rescission as part of “the DOGE plan to reform government.”
Agencies could conceivably use AI at numerous points in the rulemaking process, raising a host of legal and policy questions. AI tools of various stripes could be used to identify subjects for rulemaking or regulations in need of revision or elaboration. Machine learning tools could aid agencies in setting regulations’ substantive content by, for instance, identifying patterns in voluminous data. And generative AI could even be used to draft rule preambles and respond to public comments. Proponents anticipate that such uses could increase the effectiveness of public policy, bolster the technical capacity of regulators, or facilitate public participation. Skeptics fear they could displace human discretion and decisionmaking, decrease transparency, produce biased and arbitrary results, or compromise privacy and data governance safeguards. Whatever one thinks about the benefits and risks of using AI in rulemaking, and the legal rules and best practices that should apply, whether and how agencies are actually deploying AI tools is a matter of substantial public importance.
The public therefore should seek disclosure of how agencies are using AI. Congress has taken a first step, requiring agencies to publish AI “use case” “inventor[ies],” which the Office of Management and Budget compiles. But the public arguably has the right to additional disclosure in particular rulemakings under the Administrative Procedure Act. The APA’s notice-and-comment rulemaking provisions have long been understood to require agencies to disclose the “critical factual material” underlying proposed rules, including the “assumptions and data” on which agencies have relied. Similarly, pursuant to the APA’s reasoned decisionmaking requirement, “[w]hen an agency uses a computer model, it must explain the assumptions and methodology used in preparing the model.” Such disclosures are “[t]he safety valves in the use of . . . sophisticated methodology.”
Public policy also favors agency disclosure of AI usage. Congress recognized as much in requiring agencies to make public how they are using AI. The Administrative Conference of the United States has observed that “[a]gencies’ efforts to ensure transparency in connection with their AI systems can serve many valuable goals,” and it therefore recommends that “agencies might prioritize transparency in the service of legitimizing its AI systems, facilitating internal or external review of its AI-based decision making, or coordinating its AI-based activities.” For its part, the White House recently emphasized that agencies, in using AI, must “provide improved services to the public, while maintaining strong safeguards for civil rights, civil liberties, and privacy.” This all makes good sense. Disclosing AI usage allows the public to confirm that agencies are adhering to relevant laws, apply technical expertise to improve agencies’ use of technology, assess the risk that federal policies might be influenced by biased or otherwise faulty methods or products, and learn about an emerging and important field of technology.
One way members of the public can enforce these requirements and principles is by submitting comments on agencies’ proposed rules requesting that the agencies disclose whether and how they used or intend to use AI in the rulemaking process. My organization, Governing for Impact, recently submitted such a comment on the Office of Personnel Management’s “Schedule Policy/Career” proposed rule. We asked:
- Whether AI was used in the rulemaking and, if so, how;
- What AI product the agency used and how it was selected;
- How the product was procured;
- Whether it was fine-tuned for a particular purpose or use;
- What categories or sets of data it was trained on;
- What prompts or inputs the agency used to elicit responses or outputs;
- What outputs the AI product produced;
- How agency staff used those outputs;
- What quality control and validation agency staff performed on the outputs;
- What measures the agency took to ensure that its usage of AI complied with applicable data security and privacy requirements; and
- Whether and to what extent persons and entities not employed by the agency developed, modified, provided access to, or used AI as part of a rulemaking.
To be sure, it is not obvious that every way an agency might use AI would be subject to the APA’s disclosure requirements as presently understood (we will soon release a paper addressing that issue, among others). But agencies are, at a minimum, likely obligated to provide a reasoned response to comments on their rules that raise legal and policy arguments for disclosure.
There are many open questions about how AI can and ought to be used in rulemaking and administrative decisionmaking generally. Disclosure of how agencies are, in fact, using AI will inform discussion of those questions. A commenting strategy is, of course, only one way in which the public might push for such disclosure, but it would be worthwhile to pursue.
Jordan Ascher is a Policy Counsel at Governing for Impact.