Notice & Comment

Constructing Environmental Compliance, by Daniel E. Ho & Colleen Honigsberg

*This is the sixth post in a symposium on Cynthia Giles's "Next Generation Compliance: Environmental Regulation for the Modern Era." For other posts in the series, click here.

Textbook environmental compliance goes a bit like this: After Congress passes a statute, EPA writes rules to set pollution standards, informed by a cost-benefit analysis. Cooperative federalism means that states often carry the burden of ensuring compliance with those standards. Per the cost-benefit analysis, compliance is assumed to be pretty good, at least for a wide range of incumbent environmental regimes.

Not so, says Cynthia Giles. In an incisive volume, Next Generation Compliance: Environmental Regulation for the Modern Era, Giles overturns this conventional wisdom in three key ways. First, compliance may be much worse than you think. When EPA has randomly sampled to understand the extent of noncompliance, it has found violations far more serious, widespread, and extensive than commonly believed. Nearly half of ethylene oxide manufacturers, for instance, violate the Clean Air Act, a statistic alarming both in its magnitude and in its age: it is roughly 15 years old, yet it remains the best available evidence because random sampling occurs so infrequently. Equally concerning, facilities responsible for over 95% of the nation's petroleum refining entered into settlements with EPA for violations of the Clean Air Act. Second, conventional environmental regulation has treated compliance as a kind of afterthought. Nowhere is this more evident than in EPA's cost-benefit analysis itself: analysts are instructed to assume 100% compliance with rules. While EPA pays extraordinary scientific attention to rule-setting and permit writing, such insights can be lost in the hallways of the enforcement bureaucracy. Third, Giles's proposal for fixing this state of affairs, "Next Generation Compliance" (or Next Gen), focuses on reducing environmental law's rights-remedy gap by designing rules with compliance in mind. Monitoring, reporting, transparency, and data analytics are central to this toolkit, and the book provides vivid examples of common fail points and Next Gen in action.

There is much to laud about this volume. Giles's experience as the head of EPA's Office of Enforcement and Compliance Assurance during the Obama administration lends the book an unusual institutional grounding and domain knowledge of environmental compliance. But this is no mere collection of an administrator's war stories: the book exhibits an admirable command of the academic literature, drawing on insights from tax, health, and financial regulation to identify what works. It is a wise volume that should be assigned reading not just for those interested in the environment, but for anyone interested in crafting effective regulation. The fundamental lesson of Next Gen is that environmental compliance is not self-executing: it is constructed by regulatory design.

We write with three reflections on the volume. First, we describe how the EPA's assumption of 100% compliance (universal compliance) deviates from other regulatory areas and creates pernicious effects. Second, by highlighting some challenges and opportunities in environmental monitoring, we illustrate serious limitations of current measures. Third, while Giles rightly advocates for third-party monitoring and auditing, we identify an important and underexplored lever in environmental governance that has had substantial effects in other areas: whistleblowing.

I. Assuming Compliance

If there is one thing that Giles's evidence shows, it is that EPA's assumption of universal compliance is unrealistic. The assumption likely vastly overstates the benefits of environmental regulation, because it assumes greater reductions in environmental pollutants than are likely to occur, and it results in the systematic under-allocation of resources to enforcement and compliance.

Consider the evidence from other areas. Perhaps the best evidence on the frequency of misreporting comes from financial accounting, where a series of monitoring and enforcement mechanisms have been developed to detect and deter misreporting. Most publicly traded companies are required to undergo an audit of the internal processes and procedures that generate their financial reports, and to undergo an audit of the financial reports themselves. If the reported financials are materially incorrect, the consequences can range from mere financial penalties to loss of a career to prison.

Nonetheless, hundreds of companies restate their financial results each year. Few of these misstatements ("only 2% across the decade") are attributed to fraud. Most are caused by unintentional mistakes; some even increase reported financial performance, making the company appear more profitable than the original numbers did.

The universal compliance assumption has implications for resource allocation and how EPA itself collects information. For example, consider a permittee who consistently reports parameter values that fall just below the limit threshold—and continues to do so after the limit threshold has been reduced. Data systems designed to consider the possibility of non-compliance would automatically flag such observations for further review, with functionality for permittees to explain reported reductions (e.g., an infrastructure improvement). But EPA's current data systems are ill-equipped to identify data integrity issues. One audit showed that twelve percent of public water systems misreported data, but "EPA and state officials placed low priority on reviewing records for invalid or falsified data" because they assumed such behavior was not "widespread."
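To make the flagging idea concrete, here is a minimal sketch of such a screen. The rule, margin, and field layout are purely illustrative assumptions on our part, not EPA's actual data schema: a permittee whose recent reported values all sit just below the permit limit is flagged for human review.

```python
# Illustrative sketch only: flag permittees whose self-reported values
# consistently hug the permit limit from below. The 5% margin and
# four-period streak are hypothetical parameters, not EPA policy.

def flag_suspicious_reports(reports, limit, margin=0.05, min_streak=4):
    """Return True if the last `min_streak` reported values all fall
    within `margin` of the limit without ever exceeding it."""
    recent = reports[-min_streak:]
    if len(recent) < min_streak:
        return False
    return all(limit * (1 - margin) <= v <= limit for v in recent)

# A permittee reporting 9.6-9.9 against a limit of 10 for four straight
# periods would be flagged; scattered lower values would not.
print(flag_suspicious_reports([7.2, 9.6, 9.8, 9.7, 9.9], limit=10.0))  # True
print(flag_suspicious_reports([7.2, 6.1, 8.0, 5.5, 7.9], limit=10.0))  # False
```

A production system would of course need to account for measurement noise and legitimate operating patterns, which is why the flag triggers review rather than an enforcement action.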

Compliance cannot be constructed by assumption.

II. Improved Monitoring

Some of the most compelling data about the state of environmental compliance comes from random inspections conducted by EPA. Giles laments that while such samples can "show that 50% of facilities are violating . . . it isn't useful for taking direct action." In contrast, the electronic reporting rule that EPA adopted in 2016, which collects information about the universe of permittees under the National Pollutant Discharge Elimination System (NPDES), can provide actionable information about specific facilities. Giles describes the data reporting obligation under the Clean Water Act, which requires specific inputs for each permittee and a yes/no checkbox for whether there is a violation, as a "model of clarity."

We concur that the cleaner data collection approach under the Clean Water Act is a step in the right direction, but we remain concerned about the quality of the EPA’s NPDES data, as Giles’s discussion in other parts of the manuscript corroborates (pp. 129-32). While it was described by one EPA analyst as “the largest federal government database outside those of the Internal Revenue Service,” the database has been used nearly exclusively for reporting purposes. A recent GAO report noted serious limitations in this data, stemming in large part from vast differences in data standards across the states. Only “two of 17 states [that were recently assessed] met expectations for the accuracy and completeness of the data recorded in the agency’s national database.” As a result, GAO concluded, “EPA cannot be certain what its measure is showing.” These are the epistemic costs of cooperative federalism.

To make matters worse, the vast majority of the data consists of self-reported water quality testing. EPA's criminal enforcement division has found substantial evidence of fraudulent reporting, so a report of no violation does not necessarily mean there was no violation. As described previously, compliance cannot be assumed.

The result is that while the database is certainly large, it may not provide immediately actionable data and cannot substitute for high-fidelity ground truth data on environmental performance. And here is exactly where randomly sampled data becomes critical: it is collected under a common standard and is representative. As a result, randomly sampled data can be used both to estimate the rate of violation and to build a risk model of environmental harm (i.e., actionable information about specific facilities). As Giles notes, new technologies will be critical to leapfrog older forms of environmental monitoring that rely on self-reporting and physical inspections.
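As a hedged sketch of the first of those two uses, a simple random sample of inspections supports an unbiased estimate of the population violation rate with a standard confidence interval. The numbers below are illustrative, not EPA data:

```python
import math

def violation_rate_ci(violations, n, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for the population violation rate, from a simple random sample."""
    p = violations / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))

# If 50 of 100 randomly inspected facilities were in violation:
p, (lo, hi) = violation_rate_ci(violations=50, n=100)
print(f"Estimated violation rate: {p:.0%} (95% CI: {lo:.0%} to {hi:.0%})")
```

No comparable inference is possible from self-reported or complaint-driven data, because the facilities observed are not a representative draw from the population.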

One of the most promising approaches, as we have written about elsewhere, lies in the use of remote sensing and machine learning to spot environmental problems in automated ways. Yet EPA needs an R&D infrastructure for driving forward such innovation: a "Bell Labs" for compliance innovation. What could such an entity achieve? In the 1990s, an engineer at an EPA lab prototyped a device that could measure vehicle emissions during driving, precisely the kind of device that would have detected Volkswagen's cheating on emissions tests. Yet that EPA lab was shuttered due to budget cuts, a short-sighted move given that the undetected cheating ultimately cost Volkswagen alone over $30 billion. Congress and EPA should build on existing initiatives and create this Bell Labs for compliance innovation, with high-level leadership and resourcing to design, prototype, and evaluate the most powerful path forward.

III. Whistleblowing

Giles rightly espouses third-party monitoring and auditing as mechanisms for improved compliance. Yet there is one mechanism that receives little attention but has been remarkably effective in other domains: whistleblowing.

Securities law shows the powerful effects of incentivizing whistleblowing. The Securities and Exchange Commission (SEC) has long had a whistleblowing program to incentivize individuals to report insider trading, and Section 922 of the Dodd-Frank Wall Street Reform and Consumer Protection Act broadened the program to incentivize reporting of all types of securities fraud. The expanded program pays 10 to 30 percent of total monetary sanctions collected through any related regulatory enforcement, provided that monetary sanctions exceed $1 million. The program also strengthens protections for whistleblowers against retaliation.

The results have been impressive. Since 2011, the SEC has awarded more than $1 billion in whistleblower awards and has received more than 26,000 tips; over 12,000 tips were received in 2021 alone. With one exception, the number of tips has increased every year since 2012, highlighting the program's growing popularity. As the SEC notes, "[w]histleblowers make a tremendous contribution to the agency's ability to detect securities law violations and protect investors."

Even these numbers mask the full scope of whistleblowing in the securities area, as Section 301 of the Sarbanes-Oxley Act requires publicly traded companies to maintain internal, confidential whistleblowing hotlines for employees to report corporate misreporting. The intuition for this mandate is that top managers may be unaware of misconduct, but will stop it upon learning of its existence through a whistleblowing hotline.

Similarly, in the environmental space, some employees may be uncomfortable with their employers' environmental reporting practices, and providing those employees with a confidential reporting mechanism could create a channel between them and management. With this in mind, we were pleased that the EPA's recent proposal to reduce methane pollution requires "operators to respond to credible third-party reports of high-volume methane leaks."

In short, whistleblowing, along with data-driven triage to proactively prioritize tips, merits serious consideration in the Next Gen toolkit.

* * * *

We close by saying that we admire this volume. Giles manages to see the forest for the trees, to provide an incisive and clear-headed critique of environmental regulation, and to develop an actionable and concrete path forward.

Daniel E. Ho is William Benjamin Scott and Luna M. Scott Professor of Law; Professor of Political Science; Senior Fellow, Stanford Institute for Economic Policy Research; Director, Regulation, Evaluation, and Governance Lab (RegLab); Associate Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI); Stanford University. 

Colleen Honigsberg is Professor of Law and Bernard Bergreen Faculty Scholar; Fellow at the Stanford Institute of Economic Policy Research.
