For decades, the U.S. Environmental Protection Agency (EPA) has calculated the benefits of proposed regulations. These calculations are necessary both to justify regulation under the language of the statutes that the regulations implement, such as the Clean Air Act, and to present the cost-benefit analysis that has been required under executive orders since the Reagan Administration.

Very often, as was the case for EPA’s 2012 assessment of the benefits of tightening the standard for the fine particulates known as PM 2.5, EPA estimates regulatory benefits by multiplying epidemiologists’ estimates of the reduction in premature mortality that a new standard would generate by economists’ estimates of the value of a statistical life.
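To make that arithmetic concrete, here is a minimal sketch of the mortality-benefit calculation described above. The figures are purely hypothetical placeholders chosen for illustration; they are not EPA’s actual inputs for any rule.

```python
# Illustrative sketch of EPA-style mortality-benefit arithmetic.
# All figures are hypothetical placeholders, not EPA's actual inputs.

deaths_avoided_per_year = 1_500          # epidemiologists' estimate of premature deaths avoided
value_of_statistical_life = 9_000_000    # economists' value of a statistical life, in dollars

annual_benefit = deaths_avoided_per_year * value_of_statistical_life
print(f"Estimated annual benefit: ${annual_benefit:,.0f}")   # -> $13,500,000,000
```

The point of the sketch is only that the headline benefit number is the product of two contested estimates, so any error in either one passes straight through to the reported total.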

For other regulations, such as the so-called tailpipe regulation of 2010 that raised automobile Corporate Average Fuel Economy (CAFE) standards, EPA has relied in part on epidemiological studies to get a value for statistical lives saved, but also in part on purely economic studies of the value of higher mileage vehicles to consumers.

During the spring and summer of 2018, EPA issued two rulemaking proposals that together could significantly improve its benefit calculation process. The first, and more important, was proposed in April and would require EPA to ensure that the data justifying its regulations be “publicly available in a manner sufficient for independent validation.” The second, issued in June, is a request for comment on potential EPA rulemaking on how the agency conducts cost-benefit analysis.

Both proposals may seem abstract and technical. But, in fact, they could fundamentally change the process by which EPA calculates regulatory benefits. Indeed, the change could be so far-reaching that, under processes revised to conform to the new regulations, the benefits of both the 2012 PM 2.5 standard and the 2010 tailpipe rule would be revealed to be much smaller than EPA reported, and far smaller than their costs.

Consider first the April proposed rule that would require the public availability of the data and methods used to calculate regulatory benefits. By requiring that EPA regulations be based on data that are actually available for statistical testing by outside, non-EPA-affiliated researchers, this rule would bring EPA practices up to the standard observed in every scientific discipline, where peer-reviewed journals require that authors share data with other researchers and, in some instances, even require study authors to post online both the data and the statistical code used to derive published results. The reason that scientific journals require these disclosures is fundamental to the scientific process: Unless the authors of published work share their data and methods, future researchers cannot verify whether the reported results can be replicated and relied upon as a basis for further work.

Public availability of data and methods is crucial to the progress of science because only with such availability can scientists avoid going down dead-end roads built on previous results that were actually false. Such public availability is even more important in the regulatory context, where false scientific findings do not just cause researchers to waste time and money pursuing dead-end research; they also justify regulations costing billions of dollars and sometimes thousands of jobs. Before such regulations are issued, the underlying science must be publicly available for replication and critical review.

The environmental lobby has objected that the April proposed rule would cut the legs out from under important regulations that have been justified by studies—epidemiological studies, in particular—whose data have never been made publicly accessible for replication and review. This is true, but it is not a bad thing.

To see why, consider a particularly important EPA rule: the Obama-era EPA’s 2012 rule tightening the standard for fine particulates, or fine PM—that is, particles of dust and various pollutants less than 2.5 microns in diameter. As reported by the Office of Management and Budget, since 2004 the monetized benefits of tougher fine PM standards have made up the majority—and in some years over 80 percent—of the quantified benefits of all federal regulations. The Obama EPA used the quantified benefits of fine PM reduction not only to support the new fine PM standard, but also as a side effect or “co-benefit” justifying tougher standards for other air pollutants, such as ozone, oxides of nitrogen, and even greenhouse gases.

In calculating the benefits of toughening the fine PM standard, the Obama EPA relied heavily on two long-term studies—called cohort studies—of particular individuals exposed to varying levels of fine PM. The goal of a cohort study is to see whether exposure to a particular pollutant or engaging in a particular behavior, like smoking, increases mortality risk. Cohort studies are widely used in epidemiology and biomedical research. Like clinical trials for new drugs—where a known group of people are given a new drug, and two other groups are given a placebo and an existing drug already on the market—cohort studies enlist a known group of individuals for study.

The environmental lobby argues against EPA’s proposed rule primarily by noting that data from such cohort studies cannot be made publicly available without compromising promises of confidentiality made to study subjects. This argument is ludicrous. As the EPA’s April proposed rule notes, there are well-known protocols for ensuring that cohort study data can be shared for purposes of replication and further scientific work without compromising the identity of study participants.

For example, following the standard practice of the Organization for Economic Co-operation and Development (OECD), the Medical Research Council of Great Britain requires the sharing of data from any study that it funds. At the same time, the Medical Research Council also requires that “data-sharing agreements must prohibit any attempt to identify study participants from the released data or otherwise breach confidentiality” or “make unapproved contact with study participants.” EPA can and should impose precisely the same requirements.

During the Obama Administration, EPA’s fallback, given the obvious absurdity of the argument that making data publicly available would compromise confidentiality, was an even more absurd argument: that EPA could not obtain the data because they were owned by the researchers who had conducted the cohort studies, not by EPA.

The problem with this argument is that both of the long-term cohort studies that provided the primary evidence for EPA’s 2012 toughening of the fine PM standard were primarily funded by EPA itself. For example, as I describe in more detail in my chapter in the forthcoming Cato Institute book, Science and Liberty, the Health Effects Institute, which paid for a reanalysis and an extended reanalysis of a long-term cohort study called the Harvard Six Cities Study, has received at least $87 million from EPA since 2000. If EPA is paying for research, there is no reason in the world why it cannot follow the standard OECD practice of requiring both that data from such studies be shared with other researchers and that safeguards be put in place to ensure that confidentiality is not breached in that sharing.

Scientific journals these days routinely contain lists of articles corrected or retracted entirely because the results they reported have failed the scientific litmus test of replication. Certain fields, such as social psychology, now operate under a perceived “replication crisis” due to the discovery that large bodies of supposedly true results established by stars in the field were either based on fabricated data or simply could not be replicated.

EPA’s proposed rule requiring the public availability of data does no more than make EPA current with best scientific practice. Without such public availability of data, EPA regulations are not based on the scientific process as it is now undertaken; they cannot plausibly be considered science-based.

As for EPA’s June proposed rule asking whether it should issue regulations setting standards for how it performs requisite cost-benefit analyses, my answer is a resounding “yes.”

There has been so little consistency in how EPA calculates regulatory benefits that the exercise has often seemed much less an objective analysis and much more an attempt to find whatever methodology yields estimated benefits large enough to justify a proposed rule on cost-benefit grounds.

There are many examples of this advocacy-driven inconsistency in cost-benefit analysis. In the case of the 2010 CAFE standards, EPA rejected a large number of economic studies showing that consumers did not attach high value to increased mileage, choosing instead to multiply an estimate of reduced gallons of gasoline consumed by the estimated price of gasoline. The latter approach, which relies on projections of future miles driven and gasoline prices that are subject to enormous uncertainty, had the apparent virtue of increasing the estimated value of the CAFE standard.
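A rough sketch of that fuel-savings calculation shows how sensitive the result is to uncertain projections. The inputs below (annual miles driven, before-and-after fuel economy, gasoline price) are hypothetical and chosen only to illustrate the structure of the calculation, not to reproduce EPA’s numbers.

```python
# Hypothetical sketch of the fuel-savings benefit calculation described above.
# Every input is a placeholder; small changes in the projections swing the result widely.

miles_driven_per_year = 12_000      # projected annual miles per vehicle
old_mpg, new_mpg = 27.0, 35.0       # fleet fuel economy before and after the standard
gas_price = 3.50                    # projected price of gasoline, dollars per gallon

gallons_saved = miles_driven_per_year / old_mpg - miles_driven_per_year / new_mpg
annual_savings_per_vehicle = gallons_saved * gas_price
print(f"Gallons saved per vehicle-year: {gallons_saved:.0f}")
print(f"Fuel savings per vehicle-year: ${annual_savings_per_vehicle:,.2f}")
```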

A few years later, around 2011, EPA initially proposed to value alternative approaches to regulating power plant cooling water intake by simply asking people how much they would hypothetically pay to protect various fish populations and aquatic ecosystems. These contingent valuation surveys attempt to elicit stated preferences, not preferences revealed by the actual choices people make. Many economists who do not make a living running these surveys believe that they do not reliably measure people’s actual willingness to pay for anything.

Finally, in many of its Obama-era greenhouse gas regulations, including the 2015 Clean Power Plan, which set guidelines for states to follow in ending electric utility generation from coal-burning power plants, the biggest benefit EPA calculated came from valuing reductions in greenhouse gas emissions at the Social Cost of Carbon (SCC). But those SCC estimates assumed that people are very limited in their ability to adapt to climate change. Recent work has proven this assumption to be fabulously wrong, as indeed it had to be: if humans could not adapt to differing climates, our species should have vanished with the onset of the Holocene, the Netherlands should not exist, and American settlers should have failed to establish agriculture on the Great Plains.

The only standards that EPA has seemed to apply consistently in doing cost-benefit analysis are to choose whatever approach generates the largest regulatory benefits and to obfuscate, rather than fully disclose, both the uncertainty of its estimated benefits and the dependence of those estimates on particular, and often questionable, underlying methodological choices. EPA should issue regulations setting guidelines for how it will estimate both regulatory benefits and costs, specifying that certain methodologies—such as stated preference estimates from contingent valuation—will be disfavored as unreliable.

These regulations should also establish consistency in how EPA deals with wide variation and uncertainty in estimates. In the tailpipe rule, EPA did not use any empirical estimates of consumers’ revealed market preference for increased gas mileage, explaining that there was too much variation in the reported estimates. By contrast, in producing an SCC figure for EPA’s use, an Interagency Working Group generated a single estimate despite enormous variation in published SCC values, which range from large negative SCCs—that is, net benefits from increasing atmospheric carbon dioxide concentrations—to positive SCCs in the hundreds of dollars per ton of carbon dioxide. The Interagency Working Group reported SCC estimates from three models built on particular assumptions, most importantly the assumption of a very limited ability to adapt to future climate change.
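One way to see what is lost when that variation is collapsed into a single number is the following sketch. The estimates below are invented for illustration and are not drawn from the actual models; the only point is how much of the range a single reported figure suppresses.

```python
# Hypothetical illustration of collapsing widely varying SCC estimates into one figure.
# The values are invented for illustration, not actual model output.

from statistics import mean

scc_estimates = [-15, 5, 20, 45, 120, 300]   # dollars per ton of CO2, spanning a wide range

print(f"Range of estimates: {min(scc_estimates):+d} to {max(scc_estimates):+d} dollars per ton")
print(f"Single reported figure (simple mean): {mean(scc_estimates):.0f} dollars per ton")
```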

EPA should by regulation specify how it would treat uncertainty in estimated regulatory benefits: If some studies are to be given weight while others are ignored, EPA must clearly explain which general scientific principles determine the weighting choice. Only through such regulatory guidance can EPA end its practice of ignoring uncertainty when doing so allows it to choose studies that support large regulatory benefits and dismiss studies with very low estimated benefits that weaken the case for regulation.

A particular topic that EPA’s cost-benefit analysis regulations should address is how it will calculate the total benefits of reducing one pollutant—call it pollutant A—when the actions taken to reduce that pollutant also reduce the levels of another pollutant—call it pollutant B—that is also regulated directly. This is known as the “co-benefits” problem in cost-benefit analysis, with the co-benefits being the reductions in pollutant B that arise from regulations targeting pollutant A.

Take ozone as pollutant A and fine PM as pollutant B. EPA’s regulation on cost-benefit analysis should clearly specify that, in calculating the total benefits from reducing ozone, side benefits from reducing fine PM should be counted in the cost-benefit analysis only if the reductions in fine PM—due to the reductions in ozone—have been credibly established to be over and above those achieved by regulating fine PM directly. If this is not done, and, for example, the benefits of regulations targeted at reducing fine PM directly are added as an indirect benefit of regulations targeted at reducing other pollutants, then the benefits of reducing the other pollutants may be vastly overestimated. New EPA regulations should clearly set out that benefits from reducing pollutant B should be included as a co-benefit of reducing pollutant A only to the extent that reductions in pollutant B have been credibly established to result solely from the new regulation under consideration.
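A small numerical sketch may help show how the double counting arises. The dollar figures below are invented solely to illustrate the accounting problem; they are not drawn from any EPA analysis.

```python
# Hypothetical illustration of the co-benefits double-counting problem.
# Pollutant A = ozone, pollutant B = fine PM; all dollar figures are invented.

direct_ozone_benefit = 2.0                  # $ billions from reducing ozone itself
pm_benefit_from_direct_pm_rule = 10.0       # $ billions already claimed by the fine PM rule
incidental_pm_benefit_of_ozone_rule = 1.0   # $ billions of PM reduction truly attributable
                                            # to the ozone rule, over and above the PM rule

# Overstated total: counts PM benefits already attributed to the direct PM rule.
overstated = direct_ozone_benefit + pm_benefit_from_direct_pm_rule

# Properly counted total: only PM reductions caused solely by the ozone rule.
proper = direct_ozone_benefit + incidental_pm_benefit_of_ozone_rule

print(f"Overstated ozone-rule benefits: ${overstated:.1f} billion")
print(f"Properly counted benefits:      ${proper:.1f} billion")
```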

EPA’s rulemakings have increasingly been based on statistical analyses that have never been subject to rigorous testing and critique, and its cost-benefit analyses on oftentimes unreliable economic methods. The two rules EPA proposed during the spring and summer of 2018 can do much to restore the scientific and economic integrity of EPA regulations.