The administrative state is undergoing a slow but radical transformation. Three seismic shifts are occurring in its procedures for deciding when and how to regulate. First, Biden regulators have emphasized the need to evaluate individual regulations on a wider range of criteria, including distributive effects, fairness, and quality of life. The likely result is “procedural layering,” whereby additional concerns are tacked on to the analysis of each individual rule. Given previous instructions to agencies to consider effects on small businesses and paperwork burdens, procedural layering is not entirely a new phenomenon, yet it looks poised to continue, with the Biden White House recently adding further instructions to consider competitive effects.
Second, concerns have been growing about agencies’ ability to adapt to new circumstances and regulatory problems. In one sense, this is a fear that there are too many regulations and that agencies are not doing enough to roll back now-undesirable rules. Dating back to the Obama era, the federal government has expressed a desire to consider the “cumulative effects” of regulations. This trend only accelerated under Trump. First, the Trump administration enacted the “One-In, Two-Out” rule in Executive Order 13,771, a type of “regulatory budgeting” that has received attention in recent years. Individual agencies under the Trump administration also took action; in the administration’s last days (in a classic example of a “midnight” regulation), HHS issued the draft “sunset rule,” which would have amended nearly all of the approximately 18,000 HHS regulations to include self-executing sunset provisions. Administrative law scholars Bethany Davis Noll and Richard Revesz have documented the Trump White House’s use of tools such as the Congressional Review Act, abeyances in pending litigation, and suspensions of final regulations to roll back regulations more aggressively than previous administrations did.
The third change is the current Biden White House focus on whole-of-government approaches. The Biden administration has since rescinded “One-In, Two-Out” and the draft HHS rule, and has instead been concerned that “regulatory ossification”—whereby burdensome rulemaking procedures bog down the rulemaking process—has hampered regulators’ ability to make new rules. Reflecting these different concerns, the Biden administration has sought to impose a “whole-of-government” mandate in several areas, such as climate regulation, financial regulation, and competition. Such mandates instruct a diverse collection of agencies to act in concert to tackle a particular regulatory problem and reflect an attempt to spur collective action where individual agencies might be unable to take sufficient steps alone. They also seek to bring a more holistic approach to rulemaking: regulatory programs should be pursued and evaluated collectively.
Why are such shifts happening? In the last fifty years, as the regulatory demands on the state grew, agencies developed toolkits and methods for parsing and analyzing regulations. Most heralded was the rise of cost-benefit analysis (CBA) as the dominant tool for evaluating the desirability of a rule. At least in part, successive administrations enacted these changes in the hope that more precise, evidence-based rulemaking would separate valid concerns from partisan handwringing on both sides of the aisle. In the words of one foremost administrative law scholar, “CBA can have a salutary ‘cooling effect’” on the rulemaking deliberative process. Rulemaking could thus be less reactionary, cooler, more rational. So what gives? Why are we still stuck with intractable disputes about the very procedures underlying rulemaking?
What’s more, can we distill a common thread from these changes? Both the Biden administration’s approach and the Trump administration’s approach mark a significant shift in how we evaluate rules. There is a feeling that agencies should be looking at the bigger picture, and dissatisfaction with the current piecemeal evaluation of rules—usually through cost-benefit analysis—is growing. In the absence of a systematic framework, discussion of these questions has been limited to competing partisan visions. The conservative concern is mostly about the cumbersome effect of too much regulation (although some left-wing scholars have come to accept this as a concern). Left-wing advocates, on the other hand, are concerned that when it comes to certain policy priorities, regulators are not doing enough. Can we possibly reconcile and settle these views?
In answering these questions, I summarize a taxonomy that I develop in the paper “Broad Optimality in Agency Rulemaking.” I argue that agencies and the scholars who discuss agency rulemaking should have a formal conception of “broad optimality” to guide their efforts to reform the administrative state. Whereas “narrow optimality” looks at individual regulations and asks whether they are the best choice out of a limited set of alternatives, “broad optimality” looks at the overall behavior of an agency (over the course of a regulatory program or a period of time) and asks whether it could have done better on the whole. Current procedures, designed as they are for narrow optimality, come at the expense of being analytical about the broader view.
There are at least three ways agencies fail to achieve “broad” optimality even when they otherwise meet the narrow standard: (i) an agency might forego rulemakings (regulatory or deregulatory) that it should undertake, (ii) an agency might make the wrong rulemakings because it does not account for interdependencies, or (iii) an agency might neglect the procedural value of its rulemakings. I analyze each in turn.
The first shortcoming is that agencies forego regulations they should pass. Anytime an agency has the opportunity and authority to regulate, there are many ways it could do so—the “space” of possible regulations is almost always very large compared to how much an agency can regulate given the severe constraints on its time and resources. As a result, agencies ignore potential regulations they should have paid more attention to. In certain cases, they may even fail to recognize that such regulations are possible.
Agencies also forego beneficial deregulatory or modifying rulemakings. That is, they fail to deregulate or modify rules when they should. Due to changing circumstances (including the passage of other rules), or new information learned by the agency, rules that were once optimal may no longer be. An agency that does not adjust to its new circumstances will thus not live up to a standard of broad optimality.
Concern about foregone regulation is shared by both sides of the aisle. Concerns about foregone deregulatory rulemakings form the rationale behind the earlier-discussed regulatory budgeting movement. According to its proponents, agencies, for various reasons, do not deregulate when the facts dictate that they should, and obsolete rules remain on the books. Thus, on this view, agencies must be induced—through regulatory budgeting or otherwise—to reevaluate rules that have become redundant since their promulgation. Similarly, the Biden administration has tried to induce more aggressive rulemaking on issues such as competition and climate via executive order—again, out of a concern that agencies left to their own devices would forego beneficial regulations in such areas.
Second, agencies commit “interdependency error.” Interdependency error occurs when agencies, in analyzing regulations, fail to consider the potential interplay of regulations. For example, when conducting cost-benefit analysis (CBA) on rules A and B, agencies measure the net benefits of A (or B) against a baseline with neither, but may fail to consider what happens when they pass both A and B. This may lead to error: the benefits of A or B may be higher (or lower) in the presence of the other regulation. Where the net benefits of a regulation are higher in the presence of other regulations, we say the regulations exhibit positive interdependencies; where they are lower, we say they exhibit negative interdependencies.
For example, as discussed in the EPA’s Guidelines for Preparing Economic Analysis, the EPA’s Clean Air Interstate Rule (CAIR) and Clean Air Mercury Rule (CAMR) are two interdependent rules. The EPA promulgated both in 2005 to mitigate pollution created by coal-fired power plants. CAMR sought to reduce mercury emissions. Meanwhile, the control technologies private firms would use to comply with CAIR—which was promulgated to reduce sulfur dioxide (SO2) and nitrogen oxides (NOx)—also incidentally reduced mercury emissions. Thus, a negative interdependency existed between the two rules: the benefits of implementing CAMR were smaller taking CAIR as given than in the scenario where CAIR was not implemented. An agency behaving according to a standard of narrow optimality may fail to account for the interdependency.
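The arithmetic of interdependency error can be made concrete with a stylized sketch. All figures below are hypothetical, chosen only to mirror the CAIR/CAMR pattern, and are not the EPA’s actual estimates:

```python
# Hypothetical net benefits (in $bn) of each regulatory scenario,
# measured against a no-regulation baseline. The joint scenario is
# worth less than the sum of the standalone scenarios: a negative
# interdependency, as when CAIR's control technologies already cut
# the mercury emissions CAMR targets.
scenario_net_benefits = {
    frozenset(): 0.0,                   # baseline: neither rule
    frozenset({"CAIR"}): 10.0,          # CAIR alone
    frozenset({"CAMR"}): 4.0,           # CAMR alone
    frozenset({"CAIR", "CAMR"}): 11.0,  # both rules together
}

def marginal_benefit(rule, in_place):
    """Net benefit of adding `rule` given the rules already in place."""
    before = scenario_net_benefits[frozenset(in_place)]
    after = scenario_net_benefits[frozenset(in_place) | {rule}]
    return after - before

# Piecemeal CBA: each rule is scored against the empty baseline,
# so the two analyses together imply 10.0 + 4.0 = 14.0 in benefits.
piecemeal = marginal_benefit("CAIR", set()) + marginal_benefit("CAMR", set())

# Broad view: adopting both rules actually yields only 11.0,
# because CAMR's marginal benefit falls from 4.0 to 1.0 once CAIR
# is in place.
joint = scenario_net_benefits[frozenset({"CAIR", "CAMR"})]
camr_given_cair = marginal_benefit("CAMR", {"CAIR"})
```

In this sketch, piecemeal analysis overstates the program’s net benefits by 3.0, and an agency ranking CAMR by its standalone net benefit of 4.0 would misjudge its true marginal value of 1.0 once CAIR exists.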
Because agencies are mostly equipped to evaluate rules in piecemeal fashion, they often miss such interdependencies between individual rules, as in the example above. Large numbers of rules may also have “cumulative effects” that only become salient or visible when the rules are viewed as a whole. Over the years, the White House has at various points instructed agencies to consider “cumulative effects,” noting that “regulated entities might be subject to requirements that, even if individually justified, may have cumulative effects imposing undue, unduly complex, or inconsistent burdens.” But we are still far from a systematic approach to the consideration of cumulative effects and interdependencies in general.
The Value of Agency Procedure
Finally, agencies do not properly analyze procedural value. That is, they often fail to evaluate formally how their processes affect the world apart from the impact of their rulemaking actions. For example, agencies have the option of using threats, promises, and other communications designed to induce the trust of private parties or coordination between regulated entities.
Scholars have documented, for instance, the use of threat letters: the FTC has often used them (sometimes secretly) in the process of enforcing laws on consumer protection, fraud, or antitrust. Agencies can also make soft law, such as by issuing guidance documents—which are legally non-binding but mostly obeyed by private actors—to communicate an intention to enforce without engaging the formal rulemaking apparatus.
Such methods may be highly effective complements to rulemaking. The use of soft law and commitment-based mechanisms such as threats and promises may achieve benefits that are often unachievable by rulemaking alone. Yet despite the clear potential (and dangers), use of these methods is not systematic. Agencies do not produce formal analyses of the effects of non-legislative rules. Nor do they analyze, formally, the kinds of benefits a commitment to certain courses of action may have.
Towards Broad Optimality
In analyzing these “broad optimality” failures, several takeaways are salient. First, each of these shortcomings presents its own issues and potentially requires its own analysis. For example, I have argued in a different Article that interdependency error should be addressed by augmenting current CBA techniques to account for rule interdependencies. By contrast, if the concern is that agencies are foregoing beneficial rulemakings, more structure might be needed to ensure that agencies are setting their agendas effectively and making the right decisions before a Notice of Proposed Rulemaking issues.
Second, the current reforms—including procedural layering and regulatory budgeting—may be too broad-brush. Regulatory budgeting might be effective for agencies with many highly ineffective regulations on the books, but it might stifle the regulatory process for agencies that are already underregulating. Many agencies, for example, regularly fail to meet statutory deadlines for rulemaking, suggesting that the demands made of these agencies, even counting only statutorily mandatory rules, are too great.
Similarly, procedural layering—increasing the factors an agency must consider—may backfire. If the liberal critique is that agencies do not regulate enough (that is, that foregone regulation is the problem), then piling on procedure—instructing agencies to weigh ever more factors—may be counterproductive if it further hampers agencies with already limited time and resources for making rules. Without a clear analysis of the broad optimality of the relevant program, solutions may well cause more harm than good.
Agencies occasionally show cognizance of each of the shortcomings enumerated here. But, in stark contrast to the remarkably technocratic approach to evaluating individual rules, “broad” assessments of agency action tend to be ad hoc. The current battleground—featuring fights over reforms such as regulatory budgeting, whole-of-government approaches, and additional regulatory procedures—reflects this divide. Agencies became increasingly technical to bring order to complex environments. But the piecemeal analysis of rules no longer suffices, and a modernized approach must remain technical while providing a broad analysis of agency regulatory programs.
Vartan Shadarevian is a Ph.D. Student in Economics at Princeton University and was formerly an attorney at Skadden, Arps and Kirkland & Ellis.