Technology has promised big benefits for public participation in notice-and-comment rulemaking, but it has also presented new challenges for agencies. For example, as the number of rulemaking comments has increased, so too has the administrative burden of reviewing those comments. Can an agency use computer software to help ease this burden while still discharging its obligations under the APA? Melissa Mortazavi addresses this question in a terrific essay, Rulemaking Ex Machina, which is forthcoming in the Columbia Law Review Online.
Mortazavi’s essay offers a concise and thoughtful discussion, urging policymakers to be deliberative and judicious in approving the use of computer automation in rulemaking. She prudently argues that agency personnel must do the core work of policymaking, using computer assistance only for supplementary tasks. “For example,” she explains, “removing duplicate submissions, where truly identical, appears to be time saving with little substantive loss.” Chores like this could be done using comment analysis software without violating the APA’s requirements.
The Administrative Conference of the United States (ACUS) reached much the same conclusion in 2011 as part of a larger project examining various legal issues raised by electronic rulemaking. When ACUS closed its doors in 1995, the Internet was new and federal employees were just starting to use email. So when the agency got up and running again in 2010, giving agencies some guidance on how the APA might apply to e-Rulemaking was a top priority. (I worked on the project in my capacity as Staff Counsel to the Committee on Rulemaking.)
In Recommendation 2011-1, Legal Considerations in e-Rulemaking, ACUS took the view that the APA, despite its 1946 enactment, offers sufficient flexibility to accommodate e-Rulemaking. With respect to the question of comment analysis software, Paragraph 1 recommended that:
Given the APA’s flexibility, agencies should: (a) Consider whether, in light of their comment volume, they could save substantial time and effort by using reliable comment analysis software to organize and review public comments.
(1) While 5 U.S.C. § 553 requires agencies to consider all comments received, it does not require agencies to ensure that a person reads each one of multiple identical or nearly identical comments.
(2) Agencies should also work together and with the eRulemaking program management office (PMO), to share experiences and best practices with regard to the use of such software.
This modest recommendation, like Mortazavi’s essay, strikes a careful balance: it assures appropriate agency review of comments while embracing the use of computer assistance to reduce the burden of truly duplicative comments. The point about reliability, which I have emphasized above, was key to ACUS’s approval.
In evaluating whether this approach makes sense, it is also worth considering the non-computerized alternatives. During the Committee on Rulemaking’s public meetings on Recommendation 2011-1, the point was made that some agencies have (for example) hired college students at an hourly rate to help review comments when there have been too many for agency staff to read without assistance. If that is the alternative, perhaps the better option is for agencies to use reliable computer analysis software to identify and group identical comments, thereby enabling agency personnel to conduct the review themselves.
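To make the deduplication task concrete: the narrow function ACUS and Mortazavi contemplate, grouping truly identical submissions so that staff read one representative of each, is computationally simple. The sketch below is purely illustrative (it is not any agency’s actual tooling, and the sample comments are hypothetical); it groups comments whose text matches after normalizing whitespace and case, while keeping every original submission on the record.

```python
from collections import defaultdict
import hashlib

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivially reformatted copies match."""
    return " ".join(text.lower().split())

def group_identical_comments(comments):
    """Group comment texts that are identical after normalization.

    Returns a dict mapping a content digest to the list of original
    submissions, so reviewers can read one representative per group
    while every submission remains in the docket.
    """
    groups = defaultdict(list)
    for comment in comments:
        digest = hashlib.sha256(normalize(comment).encode("utf-8")).hexdigest()
        groups[digest].append(comment)
    return dict(groups)

# Hypothetical submissions, for illustration only:
submitted = [
    "I support the proposed rule.",
    "I support  the proposed rule.",   # identical but for extra whitespace
    "I oppose Section 3 of the rule.",
]
grouped = group_identical_comments(submitted)
print(len(grouped))  # prints 2: two distinct comments for staff to review
```

Note that this handles only comments that are identical in substance; deciding whether "nearly identical" comments may be grouped, and how aggressively to normalize, is exactly the kind of judgment call that, on Mortazavi’s account, should remain with agency personnel.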
This post is part of the Administrative Law Bridge Series, which highlights terrific scholarship in administrative law and regulation to help bridge the gap between theory and practice in the regulatory state. The Series is further explained here, and all posts in the Series can be found here.