*This is the eighth post in a series on Michael Livermore and Richard Revesz’s new book, Reviving Rationality: Saving Cost-Benefit Analysis for the Sake of the Environment and Our Health. For other posts in the series, click here.*
Mike Livermore and Ricky Revesz should be applauded for doing the near-impossible in Reviving Rationality—spinning a lively and engaging narrative about the driest of subjects, the conduct of cost-benefit analysis (CBA) by government bureaucrats. The story they tell has all the elements of a classic morality play—a hero (cost-benefit analysis), a villain (Donald Trump), and an epic struggle between good and evil (or, more specifically, between rationality and irrationality). I certainly enjoyed the ride, and there’s no question that Livermore and Revesz have done the world a great service by so carefully and painstakingly documenting so many of the excesses and outrages of the Trump administration’s approach to regulatory policy. This book will undoubtedly take its place in an easy-to-reach spot on my bookshelf next to their previous volume, Retaking Rationality, which has been an indispensable reference.
But when it comes to the overall argument of the book—that we should “revive” cost-benefit analysis in order to return to the halcyon days before Trump, when CBA channeled our better natures to forge a bipartisan consensus and erect the “guardrails” that kept the vast apparatus of the federal bureaucracy headed down the middle path of moderation, rationality, and expertise—I feel a nagging skepticism. I worry that Livermore and Revesz have skated over too many important issues and elided a few too many important distinctions. As a result, their argument at times begins to feel like a truism: Rational agency decision making is better than irrational agency decision making. They’ll get no argument from me (or most anyone else of good conscience) on that point. But where’s the connection between rationality and CBA? I fear it was more often asserted or assumed than demonstrated.
I’m also left scratching my head about the meaning of all those Trump administration horror stories. Were the Trump administration’s manipulations of CBA an aberration, or simply the predictable result of a decision tool that is in many contexts indeterminate and consequently ripe for cynical manipulation?
Other questions nagged at me as well: What is this thing called CBA anyway? And does CBA really have a monopoly on rationality, as the book’s title suggests, or are there other decision tools that can assist agencies in making rational choices in particular contexts?
One of the Trump administration shortcomings Livermore and Revesz focus on is the failure to give adequate weight to unquantifiable regulatory benefits in justifying the rollbacks of Obama administration rules. Indeed, it’s striking just how many rules the authors catalogue in Chapter 5 for which the Obama administration was unable to quantify key benefits. And many of them involved environmental externalities, precisely the area of regulation that Livermore and Revesz identify as the sweet spot for CBA—the type of regulation for which CBA is “the most comfortable fit.”
But is it really so surprising that when key benefits couldn’t be quantified, the Trump administration exercised its judgment to find them small and insignificant despite the Obama administration’s judgment that they were big and consequential? Aren’t unquantified benefits by definition squishy? Don’t they by their very existence open up vast expanses of agency discretion in CBA that administrations of different political stripes can easily exploit for ends-driven gains? At least in those instances where significant benefits remain unquantified, I begin to wonder whether those guardrails are as secure as they might at first appear.
Perhaps sensing the infirmity in this line of argument, when they first introduce readers to the Obama administration’s use of CBA in Chapter 3—its triumphant “synthesis” of CBA with progressive regulatory goals—Livermore and Revesz begin with the quantification success stories. They describe a whole series of environmental rulemakings for which the Obama EPA was able to produce “massive” dollar estimates of regulatory benefits—benefit numbers big enough to swamp the costs, sometimes by as much as 100 to 1. But a closer look reveals that even within EPA—the agency whose rules presumably all fit within that CBA sweet spot—these success stories represent the exception, not the rule. All are Clean Air Act rules, and even though they aim at a variety of different air pollutants, there’s just one pollutant—particulate matter—that’s doing all the work behind these “massive” numbers. Particulate matter accounts for over 95% of benefits under the Cross-State Air Pollution Rule, 99% under the MATS rule, 99.9% under the 2010 SO2 NAAQS revisions, and (of course) 100% under the 2013 particulate matter NAAQS revision.
The problem is that of the dozens of pollutants EPA regulates, particulate matter is really the only one for which the agency has extensive data. (Even here, the numbers are far from complete, leaving out cancer and other long-term effects.) And that’s not just because we haven’t gotten around to generating good data on the others. Particulate matter is unique. For one thing, it is especially easy to monitor, so we have more hard data on it than on other pollutants. In addition, it causes diseases that occur relatively quickly after exposure and are therefore possible to capture in short-term studies, unlike diseases like cancer, for which subjects must be followed over decades.
The bottom line is that, despite the quantification success stories highlighted by Livermore and Revesz, unquantified benefits remain a pervasive problem for EPA. Even putting aside inherently unquantifiable values like dignity, fairness, and equity, plain old data limitations prevent the agency from monetizing significant categories of benefits most of the time. Between 2002 and 2015, significant benefit categories went unmonetized in 80% of EPA’s major rulemakings.
Livermore and Revesz’s solution to the problem of unquantified benefits is simply to quantify them. Indeed, they fault the Obama administration for failing to quantify more of the benefits to aquatic species and ecosystems of its Cooling Water Intake Structure Rule. Particularly galling in their view was the fact that EPA actually conducted a stated preference survey in an attempt to capture some of these non-market benefits, only to chicken out in the end and decide not to use it. To Livermore and Revesz, this was a missed opportunity to “push forward” new quantification techniques.
But are they perhaps underestimating the real barriers to quantification in instances like this? EPA decided not to use the study only after publication of the results touched off a firestorm of controversy, with industry economists insisting the results were so vastly overstated as to be “implausible” and “deeply flawed” while the environmentalists’ economists pointed to errors they claimed skewed the results far below their actual value. Rather than an example of unnecessary “methodological timidity,” EPA’s decision was arguably a quite rational response to a no-win situation. Indeed, stated preference surveys, which literally involve asking random members of the public how much they’d be “willing to pay” to save a species or clean up a water body, are notoriously prone to controversy—hardly the stuff firm guardrails are made of.
In the end, if significant benefits can’t be quantified and converted to dollars in a way that doesn’t just stoke controversy, then benefits can’t be directly compared to costs and CBA doesn’t really work. Or at the very least, it loses the edge it might have seemed to have over other decision tools. You can try to give what you’re doing a fancy name, like “break-even analysis,” that makes it sound all “analytic” and sophisticated, but in the end, all you’re really doing is operating on intuition—comparing apples to oranges.
Which brings me to my second question: What is this thing called CBA anyway? The term encompasses a broad variety of decision tools, but I found myself struggling to figure out which one Livermore and Revesz were talking about. There were a lot of references to maximizing net benefits (or welfare or well-being). But there was also a lot of talk about how, sometimes, important benefits will be unquantifiable and a good CBA needs to take those into account nonetheless. But there’s an internal contradiction here. You can’t calculate net benefits if significant benefits remain unquantified. And you certainly can’t determine which of a range of alternatives maximizes net benefits. The best you can do (if you’re lucky) is to say whether benefits exceed costs for a particular regulatory alternative that you’ve chosen by other means.
So, I get the nagging feeling there’s a bait and switch going on. By talking imprecisely and neglecting to define terms, Livermore and Revesz create the impression that CBA is going to enable agencies to maximize net benefits and put in place firm guardrails in the form of incontrovertible mathematical justifications for rules that can’t be easily manipulated when the political winds shift. But if it turns out that most of the time, unquantified benefits prevent the erection of those firm guardrails that formal CBA promises us, it’s not so clear CBA has much of a leg up after all on the other decision tools agencies routinely use in environmental decision making.
But wait, there are other tools? Well, yes, and this brings me to my third question. Remarkably, Livermore and Revesz get through the entire book without mentioning the CBA alternatives that are actually the bread and butter of agency decision-making in the environmental realm—feasibility analysis, health-based standard setting, cost-effectiveness analysis, and open-ended balancing. This is a particularly stunning accomplishment given that these are by and large the decision tools that the environmental statutes direct the agencies to use in setting standards. (High School civics review: When there’s a conflict between a congressionally enacted statute and a unilaterally issued executive order, which one prevails?)
This giant omission means that Revesz and Livermore neglect to mention, for example, what decision tool EPA actually used in crafting the two exemplars of Obama’s CBA-progressive policy synthesis—the Cross-State Air Pollution Rule and the MATS Rule. (Spoiler alert: It wasn’t CBA.) EPA didn’t use CBA to set the standard for either of these rules because in each case the statute didn’t call for CBA. In each instance, EPA used a version of feasibility analysis instead.
While they may not feel as theoretically satisfying as CBA, with its pedigree in welfare economics, when agencies find themselves confronting real-world problems with all their messy data gaps and bounded rationality problems, these other tools often provide the most rational and effective approach. Indeed, in many of the success stories of American environmental law, it was these scrappy, street-smart decision tools that were in the trenches, doing the work.
Revesz and Livermore ultimately seem uncertain whether the morality play they describe will have a happy or an unhappy ending—whether rationality will in fact be successfully “revived.” While I certainly appreciate their trepidation about the road ahead, I also wonder whether, ironically, the part of the story they left out might be where we’d be most apt to find a few seeds of hope. In the epic battle between the forces of rationality, evidence, and expertise versus raw political power, corruption, and manipulation, there are daunting challenges to be sure. But even when the alluring promise of a public policy backed by incontrovertible mathematical calculation eludes us, there are a variety of time-tested and effective tools ready to be drafted to the cause of “reviving rationality.”
Amy Sinden is a Professor of Law at Temple University and on the Board of Directors of the Center for Progressive Reform.