Since 1992, federal taxpayers have helped fund construction of urban rail transit lines through a program called New Starts. This program is due to expire in 2020, and today the Highways and Transit Subcommittee of the House Transportation and Infrastructure Committee will hold a hearing on whether or not to renew it.
No doubt most of the witnesses at the hearing will be transit agency officials bragging about how their expensive projects have created jobs and generated economic development. But a close look at the projects built with this fund reveals that New Starts has done more damage to American cities than any other federal program since the urban renewal projects of the 1950s. Here are eight reasons why Congress should not renew the program.
1. New Starts encourages cities to waste money. The more expensive the project, the more money New Starts provides, so transit agencies plan increasingly expensive projects to get "their share" of the money. As a result, average light-rail construction costs have exploded from under $17 million per mile (in today’s dollars) in 1981 to more than $220 million a mile today.
2. New Starts encourages cities to build obsolete technologies. There are good reasons why more than a thousand American cities replaced their rail transit lines with buses between 1920 and 1970: buses cost less and can do more than trains. A train can hold more people than a bus, but for safety reasons a rail line can only move a few trains per hour. A busway can move hundreds of buses and twice as many people per hour as any light-rail line. As one recent report concluded, "there are currently no cases in the US where LRT [light-rail transit] should be favored over BRT [bus-rapid transit]."
3. Rail transit often increases congestion. Light rail, streetcars, and even new commuter-rail lines often add more to congestion by running in streets or at grade crossings than the few cars they take off the road. The traffic analysis for Maryland's Purple Line, for example, found that it would significantly increase delays experienced by DC-area travelers.
4. New Starts forces transit agencies to go heavily in debt. New Starts pays only half of construction costs, and transit agencies often borrow heavily to pay the other half. This leaves them economically fragile so that, to avoid going into default in an economic downturn, they are forced to make heavy cuts in transit service.
5. New Starts forces cities to double down on subsidies to generate rail ridership. To promote ridership and attract so-called transit-oriented developments near rail lines, many cities subsidize such developments through tax breaks, infrastructure subsidies, and direct financing. This has added billions to the cost of rail transit projects.
6. Far from promoting economic development, New Starts may actually slow economic growth. A Federal Transit Administration-funded study concluded that, "Urban rail transit investments rarely 'create' new growth, but more typically redistribute growth that would have taken place without the investment." Yet transit agencies claim that every gas station, auto dealership, and parking lot built near a rail line was somehow stimulated by that line. The reality is that the high taxes imposed to pay for rail construction and subsidize transit-oriented developments are likely to discourage employers from moving to urban areas with new rail transit lines.
7. New Starts harms low-income commuters. Most new rail lines aim to get middle-class people out of their cars, but when the inevitable cost overruns take place, transit agencies often cut bus service to low-income neighborhoods. Due to service cuts and fare increases, for example, Los Angeles has lost more than four bus riders for every rail rider it gained from opening new rail lines.
8. Rail transit harms the environment. Some rail transit is electrified, but except in the Pacific Coast states most of that electricity still comes from burning fossil fuels. The Washington Metrorail system uses more energy and emits more greenhouse gases per passenger mile than the average car, while DC’s H Street streetcar is more environmentally harmful than a coal-rolling truck.
Transit is a local matter and should be funded at the state or local level, not by federal taxpayers. If Congress is going to fund transit at all, it should give transit agencies incentives to focus on riders, not contractors. Instead of renewing New Starts, Congress should fund transit agencies according to the amount of fares they collect, allowing the agencies to spend the money on buses or trains and on capital improvements or rehabilitation of worn-out systems. In addition to encouraging agencies to increase revenues, not costs, this would more fairly distribute federal dollars to the regions that need them the most.
An article in the Los Angeles Times last week frets that Los Angeles transit buses are "hemorrhaging riders," which is supposedly "worsening traffic and hurting climate goals." In fact, the decline of bus transit is actually helping California achieve its climate goals.
In 2017, Los Angeles Metro buses used 4,223 BTUs and emitted 349 grams of greenhouse gases per passenger mile. By comparison, the average light truck used only 3,900 BTUs and the average car just 2,900, with light trucks emitting 253 grams and cars 209 grams per passenger mile. By raising bus fares and reducing bus service, L.A. Metro is getting people out of dirty buses and into clean cars.
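As a quick sanity check, the per-passenger-mile figures above can be compared directly. This is a back-of-the-envelope sketch using only the numbers quoted in the text; the variable names are my own:

```python
# Energy and greenhouse-gas figures per passenger mile, as quoted above
# (2017 L.A. Metro buses vs. the average light truck and average car).
modes = {
    "LA Metro bus": {"btu": 4223, "ghg_grams": 349},
    "light truck":  {"btu": 3900, "ghg_grams": 253},
    "car":          {"btu": 2900, "ghg_grams": 209},
}

bus, car = modes["LA Metro bus"], modes["car"]
energy_ratio = bus["btu"] / car["btu"]           # ~1.46: bus uses ~46% more energy
ghg_ratio = bus["ghg_grams"] / car["ghg_grams"]  # ~1.67: bus emits ~67% more GHG

print(f"Bus vs. car, energy per passenger mile:    {energy_ratio:.2f}x")
print(f"Bus vs. car, emissions per passenger mile: {ghg_ratio:.2f}x")
```

On these figures, a Metro bus uses roughly half again as much energy, and emits about two-thirds more greenhouse gases, per passenger mile as the average car.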
Of course, L.A. Metro officials probably don't realize they are doing that. They are so bone-headed that they want to convert a dedicated bus route into a light-rail line in order to "increase its capacity." At present, they run a maximum of 15 buses an hour on the dedicated bus lanes, which is less than 6 percent of those lanes' capacity.
Dedicated bus lines in other parts of the world move as many as 30,000 people per hour in each direction. By comparison, no light-rail line can move more than about 12,000 people per hour. As one study concluded, "there are currently no cases in the US where LRT [light rail transit] should be favored over BRT [bus-rapid transit]."
Los Angeles Metro's CEO is currently paid well over $300,000 a year, which is almost twice as much as the governor of California and far more than the director of the state Department of Transportation, whose agency moves far more people and ton-miles of freight per day than Metro moves in a month. Yet Metro's CEO is not being paid to move people, but to separate people from their tax dollars, and so far he is doing that very well.
For more information about the future of public transit, see my recent article about LA Metro's climate strategy.
Should you be worried about mercury emitted from power plants?
Sure, but only if you are a pregnant woman who, during gestation, consumes about 220 pounds of fish caught exclusively from the ten percent most polluted fresh waters of the United States, despite all the signs along these rivers and lakes warning “DO NOT EAT THE FISH!”
Don’t take my word for it. I’m simply relaying EPA science. And not the “bad” kind produced by the Trump administration; rather, I’m talking about virtuous EPA science as practiced by the Obama administration.
A little background: mercury emissions aren’t a direct threat to humans, but instead settle onto water bodies, and then make their way up the aquatic food chain. Because mercury is a neurotoxin, the fear is that pregnant women can engender developmental disorders in their offspring by eating fish that have bio-accumulated the toxin.
In the course of promulgating the Obama-era Mercury and Air Toxics Standards for power plants, the EPA stated that it considers “IQ loss estimates of 1-2 points as being clearly of public health significance,” even though so small a number rests comfortably within the measurement error inherent to an IQ test. According to the EPA’s analysis, the Mercury Rule was necessary to prevent an IQ loss of 1.1 points supposedly suffered by children born to a putative population of pregnant women from subsistence families, who during their pregnancies eat 220 pounds of self-caught fish reeled in from the most polluted bodies of fresh water. Notably, the EPA failed to identify a single member of this supposed population. Instead, these women were modeled to exist.
Even under EPA’s ultra-accommodating analysis of its rules’ benefits, the agency pegged the benefits of the Mercury Rule at a mere $6 million. In stark contrast, the agency estimated that the rule would cost about $10 billion annually, making it one of the most expensive regulations ever.
On its face, such an imbalanced cost-benefit ratio is plainly unreasonable. But the EPA pointed to the rule’s “co-benefits,” which were estimated to dwarf the rule’s costs.
So, what are these “co-benefits”?
Retrofitted air pollution controls employ either a filter or a chemical reaction to capture pollutants from the power plant exhaust flue. Though these controls are optimized for the specific pollutants they are designed to reduce, other pollutants also are captured. With the Mercury Rule, the EPA claimed that “co-benefits” attendant to the required mercury controls would amount to $37 billion annually.
On the one hand, I agree with Prof. Cass Sunstein, who has argued that it would be foolish to ignore readily evident costs and benefits, regardless whether they are direct or indirect. On the other hand, there’s an element of duplicity behind the EPA’s co-benefits, to which I object.
Co-benefit pollutants are known as “criteria pollutants,” and they are regulated by the Clean Air Act. Indeed, criteria pollutants are regulated at a level that is “requisite to protect the public health” with “an adequate margin of safety.” That is, these pollutants are regulated by an entire statutory program at standards that go beyond what is necessary to protect public health.
The EPA, moreover, is not allowed to consider costs when it regulates “criteria” pollutants. It is somewhat ironic that the agency is attributing billions of dollars’ worth of benefits to the reduction of a pollutant beyond the stringent public health limits set by the EPA without considering costs.
The other problem with EPA’s “co-benefits” is the likelihood of double-counting. Since the George W. Bush administration, the EPA has justified most of its major rules by relying on “co-benefits.” Yet the agency isn’t keeping a running tab of its claimed “co-benefits.” As a result, it’s almost certain that the agency has counted the same benefits twice or more.
Recently, EPA Administrator Andrew Wheeler initiated reforms regarding the agency’s use of costs and benefits in the rule-making process. While there’s no reason for the agency to categorically ban the use of co-benefits, the EPA should render their use reasonable. To this end, the EPA should refine its analysis to account for double-counted co-benefits. The agency also must inform the public how much of the co-benefits valuation can be ascribed to pollution reductions below a level that the agency already had determined to be “requisite to protect the public health” with “an adequate margin of safety.”
An article in last week's New York Times joins others in asking us to sympathize with the beleaguered transit industry, whose ridership has dropped every year since Uber and Lyft arrived on the scene. The article notes that Uber and Lyft subsidized the 5.6 billion rides they carried last year to the tune of $2.7 billion, or almost 50 cents a ride.
"The risks of [transit] privatization are grave," the Times article warns. Uber and Lyft are taking "a privileged subset of passengers away from public transit systems" which "undermines support for public transportation."
What the article doesn't say is that, in order to provide 9.6 billion rides last year, public transit demanded more than $50 billion in subsidies from taxpayers, or more than $5 per ride. In other words, transit subsidies per ride are more than ten times greater than Uber and Lyft subsidies.
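The per-ride arithmetic behind that comparison is simple. Here is a back-of-the-envelope sketch using only the annual totals cited above:

```python
# Subsidy per ride, computed from the annual totals cited in the text.
transit_subsidy = 50e9    # dollars per year ("more than $50 billion")
transit_rides = 9.6e9
ridehail_subsidy = 2.7e9  # Uber and Lyft combined
ridehail_rides = 5.6e9

transit_per_ride = transit_subsidy / transit_rides     # ~ $5.21 per ride
ridehail_per_ride = ridehail_subsidy / ridehail_rides  # ~ $0.48 per ride

print(f"Transit subsidy:   ${transit_per_ride:.2f} per ride")
print(f"Ride-hail subsidy: ${ridehail_per_ride:.2f} per ride")
print(f"Ratio:             {transit_per_ride / ridehail_per_ride:.1f}x")
```

At roughly $5.21 versus $0.48, the per-ride transit subsidy works out to nearly eleven times the per-ride ride-hailing subsidy.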
I shouldn't have to say this, but there is also a crucial difference between ride-hailing subsidies and transit subsidies: the money Uber and Lyft are spending is voluntarily given to them by investors who hope to eventually make a profit. Tax subsidies are taken involuntarily from taxpayers to support systems that, as long as they are publicly owned, will never come close to making a profit.
Instead of bemoaning the loss of transit riders to ride-hailing services, we should be celebrating the fact that a fast, convenient, and affordable service is taking away the need to subsidize slow, inconvenient, and expensive transit systems. It's worth adding that Uber and Lyft might not be losing $2.7 billion a year if they didn't have to compete with a transit industry that gets $50 billion in annual subsidies.
Further, the argument that ride hailing is stealing well-off passengers away from transit doesn't stand up to the facts. As I've shown elsewhere, census data reveal that low-income people are buying cars and reducing their use of transit for commuting. The biggest growth market for transit is among people who earn more than $75,000 per year. They don't need other taxpayers to subsidize their rides to work.
Congress will revisit these issues next year when it has to reauthorize federal highway and transit spending. Today, the Cato Institute published my new report urging Congress to put transportation programs on a pay-as-you-go basis, with funding mainly out of user fees rather than tax dollars.
For those interested in finding out what is happening to transit in their urban area, I've created a spreadsheet with charts showing key variables for transit systems in more than 100 urban areas. As described in the spreadsheet's instructions, all users have to do is find the number of the urban area they are interested in, enter that number in cell F1, and the spreadsheet will automatically generate eleven charts for that area.
Today is the 150th anniversary of the driving of the golden spike that marked the completion of the first transcontinental railroad. Union Pacific, which now owns the complete route, plans to bring its newly restored Big Boy steam locomotive to Ogden to recreate, with 4-8-4 locomotive 844, the joining of the UP and Central Pacific in 1869. Numerous museums and historical societies are planning exhibits and meetings.
While it would be fascinating to watch the Big Boy operate, you'll have to excuse me for otherwise being unenthused about this event. As I see it, the first transcontinental railroad was the biggest boondoggle in nineteenth-century America, and one that -- as later railroads proved -- we could have lived without. Unfortunately, it is still being cited as an example of why twenty-first century America should do even more foolish things like build high-speed rail.
Railroads were the high-tech industry of the mid-1800s. They revolutionized both passenger travel and freight movement and made it possible to farm and extract resources in remote locations. Yet, like today's high-tech industries, many planned and some actual railroads were little more than securities schemes to separate naive investors from their money. The first transcontinental railroad did so on a grand scale, relying not on naive investors but on a gullible Congress willing to give away tax dollars and resources.
The story is told in detail in Railroaded: The Transcontinentals and the Making of Modern America by Stanford University historian Richard White. To entice investors into building a continuous line from the Mississippi River to the Pacific shore, Congress agreed to give the railroads 10 square miles of land for every mile of track. In addition, it would loan builders $16,000 a mile for construction on flat lands and $48,000 a mile in the mountains.
The leaders of the Central Pacific (which built from California) and Union Pacific (which built from the Mississippi) quickly realized that immediate profits could be made by contracting construction out to themselves. They created separate companies that built the railroads as cheaply as possible, then billed the government the full amount of the available loans. The railroads were committed to eventually repaying the loans, but that responsibility belonged to a different set of investors, while a narrow inner circle of each railroad's leaders owned the highly profitable construction companies.
The Central Pacific even went so far as to hire a geologist who claimed that the Sierra Nevada Mountains started just outside of Sacramento, many miles away from the real mountains, so they could collect the full $48,000 a mile for that segment. Using techniques like this, the people who owned the construction companies earned millions of dollars in profits before the railroads were even completed.
Later, when people became suspicious about the loans, the companies bribed the vice president, secretary of the treasury, and important members of Congress to shut them up. Eventually, some unbribed Congressmen held a hearing and subpoenaed the Central Pacific's books to find out how much it really spent on construction. Charles Crocker, who was treasurer of both Central Pacific and its construction contractors, appeared at the hearing to announce that the books had mysteriously gone missing. He thought that maybe his partner, Mark Hopkins, had accidentally thrown them out. Hopkins wasn't there to testify, having conveniently passed away a few years before the hearing.
Altogether, the Central and Union Pacific railroads received $55 million in loans (about a billion dollars in today's money) and 19.5 million acres of land. Although they were supposed to sell this land to actual settlers, they ended up keeping much of it because, even with the railroad nearby, few were willing to try to farm in the mountains or Great Basin. Although the land was not immediately valuable, by the mid-twentieth century the lands the railroads still owned were probably more valuable than the railroads themselves.
Congress learned its lesson and gave no more loans for railroad construction in the nineteenth century. But it still gave land grants to the Northern Pacific, Atlantic & Pacific (Santa Fe), and Southern Pacific to build more transcontinental railroads. As if to demonstrate that it was too soon to build a transcontinental railroad, the Union Pacific, Northern Pacific, and Santa Fe all went bankrupt in 1893. The Central Pacific escaped bankruptcy because of its access to Nevada silver mines, while the Southern Pacific nearly went bankrupt and was saved only when its leader, Collis Huntington, illegally diverted funds from the Central Pacific, which he also controlled, to the SP.
Both UP and SP eventually repaid the loans (by which time most of the people who had become millionaires from the construction scam were dead). But the terms of the loans were so vague that the railroads convinced the courts to let them repay far less than they would have owed at the interest rates the government itself paid on the bonds it sold to finance the railroads. By one estimate, they saved at least $56 million in interest charges -- about $1.5 billion in today's dollars.
In 1893, the year most of the railroads that had received land grants went bankrupt, the Great Northern Railway completed its line from St. Paul to Seattle. Built without any subsidies, the railway grew in segments, with each segment financed by the profits from the previous one. First (under the name of the St. Paul, Minneapolis & Manitoba Railway), it went from St. Paul to the Red River Valley of Minnesota and North Dakota, which was soon producing at least a quarter of the wheat grown in the United States. Then it continued to Minot, North Dakota, opening up even more wheat land.
Then, in 1887, the Great Northern built 640 miles of track from Minot to Helena (and soon after to Butte), Montana, giving it access to mines that were then paying monopoly prices to the Northern Pacific (in Helena) and Union Pacific (in Butte). Finally, having changed its name to Great Northern in 1890, it built from Great Falls to Seattle, finishing in January 1893.
That was just weeks before the economic panic that sent some 200 other railroads into bankruptcy. Great Northern survived that panic because it had been built to haul freight, not to get land grants or construction loans. It was able to earn a profit in the greatest American recession before the Great Depression even though it had to compete with other companies that didn't have to repay their creditors because they were in receivership.
Great Northern showed that a transcontinental railroad didn't have to be a boondoggle. Considering the productive farmlands in Nebraska, Kansas, and California, plus the valuable minerals in Colorado and Nevada, a transcontinental could have been built on the UP-CP route, following the Great Northern model, without government subsidies. It just wouldn't have been completed in 1869.
Nor is there any reason to think that building the railroad early helped promote economic prosperity along its route. The Wyoming and Nevada segments of the rail route remain some of the most desolate regions of the country. Nor was it vital to keeping the country together. There were plenty of railroads connecting the North and South, but they hardly prevented the Civil War. Nor was anyone in California talking about seceding from the Union if they didn't get a railroad connection to the rest of the country.
Scandalous profits, corruption, and bribery all seem to be part of mega-follies like the first transcontinental railroad. We should remember it today not as a hallmark of American enterprise but as an example of what not to do.
News reports suggest that President Trump is considering granting a Jones Act waiver to allow non-U.S.-flagged ships to transport natural gas from energy-rich parts of the United States to the Northeast and Puerto Rico. He should do so without delay. Granting this waiver would mark not just a triumph of common sense, but also help fulfill President Trump's campaign promise to take on the Washington special interests who profit from laws such as the Jones Act at the expense of American consumers and businesses.
To learn more about this issue, both the public and media alike are invited to attend an April 30 event at the Cato Institute that will examine the Jones Act’s impact on Puerto Rico. Featuring Puerto Rico's Secretary of State, the president of the Puerto Rico Economic Development Bank, and other experts, the event will include a discussion of the island's attempt to obtain a Jones Act waiver for the purpose of transporting U.S. natural gas. For further information about both the Jones Act and the Cato Institute’s effort to raise awareness about this burdensome and outdated law, please visit cato.org/jonesact.
The EPA and conventional air pollution regulations are back in the news. NPR reported that the seven-member Clean Air Scientific Advisory Committee (CASAC), which provides the EPA with technical advice for National Ambient Air Quality Standards, is “considering guidelines that upend basic air pollution science.” But NPR’s depiction of the science as settled is oversimplified; it ignores real misgivings about the research that has justified the regulations and provides an opportunity to ask questions about the proper role of science in public policy.
The pollutant in question is particulate matter (PM), tiny particles or droplets emitted from power plants, factories, and cars. The EPA contends that PM with diameters smaller than 2.5 micrometers, about 3 percent of the width of a human hair, is the most harmful because the particles can be inhaled deep into the lungs. For PM, as for the five other criteria pollutants, the Clean Air Act requires the EPA to periodically prepare an analysis that “accurately reflects the latest scientific knowledge” on the health effects of exposure. It must then set air quality standards “requisite to protect the public health…allowing an adequate margin of safety.”
Whether one favors leaning towards caution and setting stringent pollutant standards or is skeptical of the efficacy of air quality rules and worries about the costs of the regulations, PM is important. On the one hand, the supposed harms of PM are high. One (contested) study claimed that 2005 levels of PM caused about 130,000 premature deaths per year, which would make PM the sixth leading cause of death in the United States, just behind strokes. On the other hand, the regulations are expensive. Between 2003 and 2013, EPA regulations accounted for 63–82 percent of the estimated monetized benefits and 46–56 percent of the costs of all federal regulations. The benefits of reducing PM specifically are 90 percent of the monetized benefits of EPA air regulations, meaning PM rules play an outsized role in the justification for many of the costliest federal regulations.
No matter which side of the debate one is on, it would seem important that the EPA have a rational standard-setting process that properly weighs both the possible reduction in the harms of PM and the potential costs. Unfortunately, that is not the case.
The scientific evidence of the harms of PM is much more uncertain than many observers claim, and the conflict over what we do and do not know about the effects of PM has existed for decades. The evidence of negative health effects from PM comes primarily from two studies published in the 1990s, the Harvard Six Cities Study (SCS) and the American Cancer Society Study (ACS). As I have previously noted,
The SCS has been the subject of intense scientific scrutiny and much criticism because of results that are biologically puzzling. The increased mortality was found in men but not women, in those with less than high school education but not more, and those who were moderately active but not sedentary or very active. Among those who migrated away from the six cities, the PM effect disappeared. Cities that lost population in the 1980s were rust belt cities that had higher PM levels and those who migrated away were younger and better educated. Thus, had the migrants stayed in place it is possible that the observed PM effect would have been attenuated.
Furthermore, a survey of 12 experts (including 3 authors of the ACS and SCS) asked whether concentration-response functions between PM and mortality were causal. Four of the 12 experts attached nontrivial probabilities to the relationship between PM concentration and mortality not being causal (65 percent to 10 percent). Three experts said there is a 5 percent probability of noncausality. Five said a 0-2 percent probability of noncausality. Thus 7 out of the 12 experts would not reject the hypothesis that there is no causality between PM levels and mortality.
The latest installment of the debate between supporters and critics of the SCS and ACS studies is the appointment of Dr. Tony Cox as head of the CASAC and the dissolution of a PM advisory subcommittee. Cox has long criticized the science underlying PM standards and argued that epidemiological studies of pollutants have made causal assertions about PM exposure and health outcomes for which the evidence is weak.
To many epidemiologists the appointment of Dr. Cox is akin to putting a creationist in charge of an advisory panel on evolution. In a recent New York Times op-ed, Dr. John Balmes, a former member of the dissolved PM advisory subcommittee, argued,
There has been little dispute that microscopic particulate matter in air pollution penetrates into the deepest parts of the lungs and contributes to the early deaths each year of thousands of people in the United States with heart and lung disease….[Dr. Cox] has been pushing a narrow statistical approach that would exclude most epidemiological studies from consideration by the EPA in reviews of clean air standards.
But even if we accept Dr. Balmes' view that the science is settled, the EPA’s standard-setting process is deeply flawed. The requirements of the Clean Air Act combined with the attributes of PM ensure that the standards set by the EPA are arbitrary for two reasons.
First, when determining the appropriate level of air quality standards, the EPA cannot consider regulatory costs. In 2001, the Supreme Court ruled that the Clean Air Act “unambiguously bars costs considerations from the [ambient air quality standards]-setting process.” Thus, EPA decisions on pollutant standards can only be about benefits.
Second, the characteristics of PM make it very difficult to identify a suitable standard. PM is a non-threshold pollutant, meaning there is no easily identifiable concentration above which it causes harm and below which it causes none. Any concentration of PM other than zero presumably causes some harm, so the exposure standard must be based on other factors.
One logical factor to use to set that standard would be the costs of the regulation, which the EPA is not allowed to consider. Thus, for a non-threshold pollutant in the context of a policy regime in which only the benefits of exposure reduction count, the allowable amount of pollution should be zero. But for political and pragmatic reasons, the EPA cannot set the standards that low; the United States would have to deindustrialize. Instead, the EPA sets the levels at what are essentially arbitrary points.
The establishment of standards for ozone, another non-threshold criteria pollutant, between the Bush and Obama administrations illustrates how illogical this process is. Under Bush in 2007, the EPA proposed setting the standard for ozone between 0.070 and 0.075 parts per million (ppm). The scientific justification was the EPA’s interpretation of two clinical studies by Dr. William Adams, which found a reduction in lung function in subjects exposed to 0.080 ppm of ozone. The final regulation had not been issued by the time Obama was inaugurated, and in 2010 the new administration proposed lowering the standard to between 0.060 and 0.070 ppm. The justification was still Dr. Adams’s two studies, but the Obama administration reinterpreted those studies and determined that the originally proposed standards were not low enough.
Further confounding the process, Dr. Adams disagreed with both administrations’ interpretations of his findings and argued that his studies did not show any statistically significant relationship between ozone levels below 0.080 ppm and decreased lung function. Two different administrations determined two different standards based on the same studies, and the studies’ author didn’t think his findings justified either standard.
The subjectivity of a process ostensibly based on science raises a question: what is the role of science in public policy? Many seem to believe that “sound science” can and should dictate policy outcomes. Science can inform people’s preferences about policies, but science alone cannot dictate which policy outcome to choose. Weighing costs, benefits, and other normative considerations, such as individual rights and the appropriate use of governmental coercion, requires intellectual judgments that are not scientific. Science is a necessary but not sufficient condition for adjudicating public policy questions.
The Clean Air Act and its ban on the use of costs in considering air quality standards implicitly gives “rights” to those who want maximum pollution exposure reduction. Those who would prefer less exposure reduction (either because they believe the evidence for negative health effects from emission exposure is weak or emission reduction is too expensive) have no recourse but to contest the “science” used to rationalize current exposure standards.
Instead of having a never-ending scrum over science, another possibility to resolve environmental quality disputes is to recognize strict environmental rights but allow them to be relaxed in return for compensation. The national SO2 and California NOx emission trading markets are steps in the right direction. But I would go one step further and allow the “cap” in those “cap-and-trade” emissions markets to be changed. Those who would like to increase allowable emissions exposure should be able to pay local air quality regions for that change. And the proceeds should be rebated to all residents on a health-risk adjusted basis.
This type of exchange would allow trades between polluters and the most risk-averse. Without such a policy change we will be stuck with an endless political fight over whose science is more “sound.”
Written with research assistance from David Kemp.