Electricity Markets

  • “Large-Scale Battery Storage, Short-Term Market Outcomes, and Arbitrage,” by Stefan Lamp and Mario Samano. SSRN Working Paper no. 3844751, May 2021.

Wind and solar sources of electricity are not dispatchable. That is, their output cannot be varied to match demand in real time, which is required in an alternating current electricity system. The implication is that as these renewables’ share of electricity production increases, so will the requirement for natural-gas-fired generation to serve as backup. Thus, the shift to renewable production has a hidden fossil fuel component that detracts from any environmental benefits provided by renewable sources.

An alternative non-fossil-fuel backup for intermittent renewable generation is large-scale lithium-ion battery storage, using the same battery technology found in electric cars such as Teslas. The United States had 1,236 megawatt-hours (MWh) of battery storage capacity at the end of 2018. For comparison, the average new natural gas combined-cycle plant had 800 MW of generating capacity in 2016, so a single such plant could produce the energy equivalent of all U.S. battery storage in about an hour and a half.
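A quick back-of-envelope check of that comparison, using only the figures quoted above:

```python
# How long would an 800 MW combined-cycle plant take to generate as much
# energy as all U.S. battery storage could hold? (figures as quoted above)
total_storage_mwh = 1_236   # U.S. battery storage at end of 2018, MWh
plant_capacity_mw = 800     # average new combined-cycle plant, MW

hours = total_storage_mwh / plant_capacity_mw
print(f"{hours:.2f} hours")  # ~1.55 hours, i.e., roughly an hour and a half
```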

This paper examines the role of lithium-ion battery storage in electricity supply in California, which had 47 of the 172 such U.S. facilities from May 2018 through February 2020. The authors ask: Do such storage facilities discharge when load is high? Do they charge when wholesale prices are low and sell when prices are high, as optimal arbitrage would predict? And do storage facilities lower equilibrium prices in electricity markets? They conclude that the facilities behave much as price-takers maximizing arbitrage opportunities would, and that their operation reduces prices by about 31¢ per MWh, or 0.8%.
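To make the "optimal arbitrage" benchmark concrete, here is a minimal price-taking sketch. It is not the authors' model: the hourly prices, power limit, storage capacity, and efficiency below are all illustrative assumptions. The point is only that, given prices it cannot influence, a battery maximizes profit by charging in cheap hours and discharging in expensive ones, subject to its physical limits.

```python
# A minimal price-taking arbitrage sketch (illustrative assumptions, not the
# authors' model): choose hourly charging and discharging to maximize revenue
# given fixed wholesale prices, subject to power and energy limits.
import numpy as np
from scipy.optimize import linprog

prices = np.array([25, 22, 20, 21, 24, 30, 45, 60,   # $/MWh, illustrative day
                   55, 40, 35, 33, 32, 34, 38, 50,
                   70, 90, 85, 65, 48, 40, 32, 28], dtype=float)
T = len(prices)
P_MAX = 10.0   # MW charge/discharge limit (assumed)
E_MAX = 40.0   # MWh storage capacity (assumed)
ETA = 0.9      # efficiency loss applied to charging (assumed)

# Decision vector x = [charge_1..T, discharge_1..T]; maximize sum p*(d - c),
# i.e., minimize sum p*(c - d).
cost = np.concatenate([prices, -prices])

# State of charge after hour t: s_t = sum_{k<=t} (ETA*c_k - d_k), 0 <= s_t <= E_MAX.
L = np.tril(np.ones((T, T)))
A_soc = np.hstack([ETA * L, -L])       # s_t as a linear function of x
A_ub = np.vstack([A_soc, -A_soc])      # s_t <= E_MAX and -s_t <= 0
b_ub = np.concatenate([np.full(T, E_MAX), np.zeros(T)])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, P_MAX)] * (2 * T), method="highs")
charge, discharge = res.x[:T], res.x[T:]   # hourly schedules
print(f"Optimal arbitrage revenue for the day: ${-res.fun:,.0f}")
```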

In the paper’s model, battery storage was predicted to be highly profitable, netting $2.5 million over 10 years. But the actual California data suggest 10-year losses of $800,000. Without other sources of revenues, profit-maximizing firms will not enter this market. — Peter Van Doren

Conventional Air Pollution

  • “Why Are Pollution Damages Lower in Developed Countries? Insights from High-Income, High-Particulate-Matter Hong Kong,” by Jonathan Colmer, Dajun Lin, Siying Liu, and Jay Shimshack. SSRN Working Paper no. 3896141, August 2021.

Reducing exposures to particulate matter (PM) — whether 10 microns in size or smaller (PM10) or “fine” particulate matter of 2.5 microns or smaller (PM2.5) — is believed to deliver significant health benefits. For instance, PM2.5 reduction accounts for 90% of the estimated benefits of the U.S. Environmental Protection Agency’s conventional air pollution regulations. (See “The EPA’s Implausible Return on Its Fine Particulate Standard,” Spring 2013.)

The EPA’s estimates of the mortality benefits from PM2.5 reduction come from two studies: the American Cancer Society Study and the Harvard Six Cities Study. Both have been the subject of much methodological criticism. (See “The Fight over Particulate Matter,” Cato-at-Liberty blog, April 22, 2019.)

Economists have responded to the methodological weaknesses of these observational studies by investigating the results of natural experiments in which people are exposed to pollutants in a manner that is plausibly random and the resulting health effects are observed. One such research design involves random changes in prevailing wind direction that briefly expose different populations of people to pollutants. I described one such study in the Winter 2015–2016 Working Papers column and another in Spring 2017.

A new study of PM10 effects examines birthweight and infant mortality outcomes in Hong Kong from 2001 through 2014. Hong Kong offers the unusual combination of both high pollution and high income: its particulate matter levels are close to those of mainland China, India, and Pakistan, yet its per-capita income is comparable to that of the United States. Average daily PM10 exposure over gestation in Hong Kong was 54 micrograms per cubic meter (μg/m³), with a minimum of 30 μg/m³ and a maximum of 94 μg/m³. The World Health Organization recommends that annual average PM10 exposure not exceed 20 μg/m³. To put that in perspective, the 2019 average in Los Angeles County was 19 μg/m³.

The source of plausible exogenous variation in PM exposure is thermal inversions in which warmer air aloft traps pollutants in cooler air closer to the surface. The study calculates the number of thermal inversions during the 270 days of gestation. Inversions are more common in winter, so the study controls for monthly and neighborhood location fixed effects. Conditional on these controls, the incidence of thermal inversions is plausibly exogenous.
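As a rough illustration of this research design (a sketch with simulated data, not the authors' code or results), the estimating equation is essentially a fixed-effects regression of birth outcomes on the count of gestational thermal inversions, with month and neighborhood fixed effects absorbing the seasonal and locational patterns that make inversions non-random:

```python
# Sketch of the fixed-effects design described above, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "inversions": rng.poisson(20, n),        # inversions over ~270 days of gestation
    "month": rng.integers(1, 13, n),         # calendar month (seasonality control)
    "neighborhood": rng.integers(0, 50, n),  # neighborhood identifier
})
# Simulated outcome: a small negative inversion effect on birthweight (grams)
df["birthweight"] = (3300 - 2.0 * df["inversions"]
                     + 10 * np.sin(df["month"]) + rng.normal(0, 400, n))

model = smf.ols("birthweight ~ inversions + C(month) + C(neighborhood)",
                data=df).fit(cov_type="cluster",
                             cov_kwds={"groups": df["neighborhood"]})
print(model.params["inversions"], model.bse["inversions"])
```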

The study finds that higher gestational particulate matter exposure is associated with significant reductions in birthweight and significant increases in the incidence of low birthweight. These marginal PM–birthweight effects are substantial: a 10 μg/m³ increase in particulate matter is associated with the equivalent of the estimated effects of smoking 15 cigarettes per day during pregnancy.

But there are no increases in infant mortality. The authors speculate that Hong Kong’s wealth and health institutions facilitate post-natal health interventions that offset pollution exposure effects. They write: “Conventional wisdom suggests that marginal pollution damages are high in less-developed countries because they are highly polluted. We provide early evidence that marginal particulate matter damages are high in less-developed countries because they are more polluted and because they are less developed.” — P.V.D.

Pharmaceutical Innovation and Antitrust

  • “Paying Off the Competition: Market Power and Innovation Incentives,” by Xuelin Li, Andrew W. Lo, and Richard T. Thakor. SSRN Working Paper no. 3870420, June 2021.

In the early 2000s, many pharmaceutical companies paid potential generic competitors to delay their entry. In 2013, in Federal Trade Commission v. Actavis, the Supreme Court ruled that such agreements are subject to antitrust scrutiny. The ruling was arguably unexpected because lower courts had approved such “pay-for-delay” contracts.

The plausibly exogenous nature of the court decision allows researchers to examine how brand-name pharmaceutical company innovation responded. The paper gathers detailed data on public pharmaceutical firms and their drug development portfolios from 2005 to 2016 to construct a firm-specific measure of the amount of generic drug competition each branded incumbent faces.

Prior to the 2013 ruling, incumbent firms responded to potential entry from direct competitors by reducing their innovation activity and initiating a smaller number of new drug trials. After the ruling, incumbent firms increased their number of new drug trial initiations and decreased the number of suspensions of existing projects in response to generic entry filings. Once incumbent firms could no longer lengthen the term of their monopoly power through legally risky pay-for-delay agreements, they expanded innovation activities. — P.V.D.

Zoning

  • “The Impact of Local Residential Land Use Restrictions on Land Values Across and Within Single-Family Housing Markets,” by Joseph Gyourko and Jacob Krimmel. NBER Working Paper no. 28993, July 2021.

Edward Glaeser and Joseph Gyourko did landmark work in the early 2000s on the negative effects of zoning on housing supply. (See “Zoning’s Steep Price,” Fall 2002.) In this paper, Gyourko and coauthor Jacob Krimmel provide an important update.

The intuition behind the research is based on the “law of one price,” also known as the “law of arbitrage.” In a completely unregulated market, there should be no difference in the value that an existing homeowner or homebuilder places on an extra square foot of land. That is, if the value an existing homeowner puts on having a bit more land (known as the intensive margin value in economics) is less than the value a builder places on the same amount of land with the right to build on it (known as the extensive margin value), then the owner-occupier should subdivide and sell the vacant land to the builder. Unless there are regulations preventing that increase in density, there should be no gap between land values on the intensive and extensive margins.

The innovation in this paper comes from access to data on sales of vacant land to builders. The earlier work had to infer land values by subtracting construction costs from sales of parcels with houses on them. The new data provide direct observation of prices paid for individual parcels of vacant land purchased by builders for single-family housing units. The data are for 2013–2018 for 24 major U.S. metropolitan areas: Atlanta, Boston, Charlotte, Chicago, Cincinnati, Columbus, Dallas, Deltona (FL), Denver, Detroit, Los Angeles, Miami (FL), Minneapolis, Nashville, New York City, Orlando, Philadelphia, Phoenix, Portland (OR), Riverside–San Bernardino (CA), San Francisco, San Jose, Seattle, and Washington, DC. There are 3,640 observations on vacant parcels purchased with the intention of building single-family housing units across these markets. In each of these metropolitan areas, the paper found at least 20 valid vacant land purchases for single-family development over the 2013–2018 period within 30 miles of the area's centroid.

The extensive value of land for developers is calculated by dividing the value of a parcel by the number of homes planned by the builder. For example, a 54.5‑acre parcel near Atlanta with 96 planned homes sold for $6,479,937, so the extensive margin value of land per intended housing unit is $67,499 ($6,479,937 ÷ 96), or $2.73 per square foot.

The intensive land value is calculated through a regression estimated from 1,000 sales of nearby properties over 2013–2018 (on average within a mile of the vacant parcel site). Home sale prices are regressed on lot size, home size, house age, age squared, and dummy variables for multi-story, townhome, and census tract. In the Atlanta example, the coefficient on lot size implies an intensive margin price of $1.72 per square foot. The average lot size of the 100 new houses closest to the vacant land and built during 2013–2018 was 16,866 square feet, which multiplied by $1.72 yields an intensive land value of $29,010 per lot. The estimated zoning tax is the difference between the extensive lot value of $67,499 and the intensive lot value, which comes to $38,489.
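The arithmetic of the Atlanta example can be reproduced directly from the figures quoted above (a back-of-envelope sketch; the roughly one-dollar difference in the final zoning-tax figure comes from rounding the intermediate values):

```python
# Reproducing the Atlanta example's arithmetic (figures as quoted above).
ACRE_SQFT = 43_560

# Extensive margin: vacant parcel sold to a builder
parcel_price = 6_479_937        # sale price, $
parcel_acres = 54.5
planned_homes = 96
extensive_per_unit = parcel_price / planned_homes               # ~$67,499 per intended home
extensive_per_sqft = parcel_price / (parcel_acres * ACRE_SQFT)  # ~$2.73 per sq ft

# Intensive margin: hedonic lot-size coefficient times the average nearby new lot
intensive_per_sqft = 1.72       # $/sq ft, from the regression's lot-size coefficient
avg_lot_sqft = 16_866
intensive_per_lot = intensive_per_sqft * avg_lot_sqft           # ~$29,010

# Zoning tax: extensive minus intensive value per lot
zoning_tax = extensive_per_unit - intensive_per_lot             # the paper's $38,489 uses rounded inputs
print(round(extensive_per_unit), round(extensive_per_sqft, 2),
      round(intensive_per_lot), round(zoning_tax))
```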

The paper then standardizes these estimates for quarter-acre lots. The gap between extensive and intensive margin land values of a quarter-acre plot of land is about $400,000 in San Francisco, ranges between $150,000 and $200,000 in Los Angeles, New York, and Seattle, and is over $100,000 in San Jose. The zoning tax is $60,000–$80,000 in Chicago, Philadelphia, Portland, and Washington, just under $50,000 in Boston, and $35,000–$40,000 in Miami and Riverside–San Bernardino. In the other markets, the median zoning tax per quarter-acre lot is less than $25,000, which the authors describe as negligible to small. — P.V.D.

Securities Regulation

  • “Regulatory Costs of Being Public: Evidence from Bunching Estimation,” by Michael Ewens, Kairong Xiao, and Ting Xu. NBER Working Paper no. 29143, August 2021.

In 1975, the United States had 4,927 publicly traded firms. That number rose over the next two decades, peaking at 7,576 in 1997. But by the end of 2018, the United States had only 3,613 listed firms. Why the decline?

In the Spring 2020 Working Papers column, I reviewed a paper that argued the cause was the increased importance of intangible rather than physical assets in business, as well as more liberal treatment of private partnerships, which were limited to 100 investors in 1982 but could have 2,000 after 2012.

Another possibility, not considered in that paper, is increased regulation of public firms. The current paper describes three regulatory thresholds based on the market value of a firm's publicly traded stock, or public float ($25 million, $75 million, and $500 million), below which firms face lower disclosure and compliance costs. The data clearly show a “bunching” of firms just below these thresholds, achieved by substituting debt for equity without changing operations or insider ownership. That suggests it is valuable for firms to avoid the disclosure and compliance costs. (See Winter 2016–2017 Working Papers for a prior use of this technique.)
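The bunching logic can be illustrated with a simple simulation (a sketch only, not the paper's estimator, and the data are made up): compare the observed distribution of firms' public float near a threshold with a smooth counterfactual fitted away from the threshold, and attribute the excess mass just below the threshold to firms positioning themselves to avoid the costlier regime.

```python
# Illustrative bunching sketch on simulated public-float data.
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 75.0                                           # $75 million public float
floats = rng.lognormal(mean=4.0, sigma=0.8, size=20_000)   # simulated floats, $M
# Simulate bunching: some firms just above the threshold shift just below it
movers = (floats > THRESHOLD) & (floats < THRESHOLD * 1.1) & (rng.random(20_000) < 0.5)
floats[movers] = rng.uniform(THRESHOLD * 0.9, THRESHOLD, movers.sum())

bins = np.arange(25, 126, 2.5)                  # $2.5M bins from $25M to $125M
counts, edges = np.histogram(floats, bins=bins)
centers = (edges[:-1] + edges[1:]) / 2

# Fit a polynomial counterfactual, excluding bins near the threshold
excluded = (centers > THRESHOLD - 10) & (centers < THRESHOLD + 10)
coef = np.polyfit(centers[~excluded], counts[~excluded], deg=4)
counterfactual = np.polyval(coef, centers)

# Excess mass in the bins just below the threshold
window = (centers > THRESHOLD - 10) & (centers <= THRESHOLD)
excess = counts[window].sum() - counterfactual[window].sum()
print(f"Excess mass just below the threshold: {excess:.0f} firm-observations")
```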

Using private as well as Securities and Exchange Commission data on firms' regulatory compliance expenditures, the authors estimate that the median U.S. public firm spends 0.3% of its earnings before interest, taxes, depreciation, and amortization (EBITDA) each year on enhanced disclosure compliance, 0.9% on tightened internal controls, and 2.1% on a combination of disclosure and internal control rules. Regulatory costs as a percentage of EBITDA increased from 0.15% before the Sarbanes–Oxley Act of 2002 (SOX) to 0.23% after SOX. Since 2005, though, there has been a steady decline, and by 2018 regulatory costs relative to EBITDA had dropped back to their pre-SOX levels.

Using data on 21,066 U.S. venture capital (VC)-backed firms from 1992 to 2018, of which 1,957 went public, the authors estimate a model of the effect of regulatory costs on the likelihood of going public. They then use the model to estimate how many additional firms would have gone public had various regulations not been enacted and raised costs. Removing SOX alone increases the average initial public offering (IPO) likelihood after 2000 only slightly, from 0.95% to 0.96%. This result may appear surprising given that SOX costs are substantial; however, 82% of VC-backed firms would have a public float below the SOX exemption threshold upon IPO. Removing all regulatory costs would increase post-2000 IPO likelihood among VC-backed firms from 0.95% to 1.4%, and the average yearly number of VC-backed IPOs over 2000–2018 would rise from 50.2 to 70.6. While the effects are real, that increase of 20.4 IPOs per year offsets only 22% of the decrease in yearly IPO volume from pre-2000 to post-2000. — P.V.D.

E‑Cigarette Taxation

  • “Intended and Unintended Effects of E‑cigarette Taxes on Youth Tobacco Use,” by Rahi Abouk, Charles J. Courtemanche, Dhaval M. Dave, et al. NBER Working Paper no. 29216, September 2021.

Previous Working Papers columns examined the effects on tobacco use of a ban on e‑cigarette sales to minors (Summer 2016) as well as taxes on e‑cigarettes (Spring 2020). The ban caused tobacco cigarette use among 12- to 17-year-olds to increase. A large tax on e‑cigarettes in Minnesota resulted in more tobacco cigarette use because of a decrease in quitting among adults.

The current paper examines the effects of e‑cigarette taxes enacted by 10 states and two counties (Cook County, IL and Montgomery County, MD) on adolescent smoking behavior. The authors conclude that adolescents reduce their e‑cigarette use in response to higher taxes, but 68% of them switch to tobacco cigarettes.

Currently, Congress is considering doubling the federal cigarette excise tax to $2.01 per pack and taxing e‑cigarettes equivalently (a roughly $2.01 tax per 0.7 mL of nicotine liquid, on the assumption that a Juul pod is equivalent to a pack of cigarettes). Using the results of their research, the authors suggest that this would reduce youth e‑cigarette use by 5.5 percentage points but increase traditional cigarette use by 3.7 percentage points. — P.V.D.

Value of a Statistical Life

  • “The Value of a Statistical Life: A Meta-Analysis of Meta-Analyses,” by H. Spencer Banzhaf. NBER Working Paper no. 29185, August 2021.

The most important facet of most regulatory benefit–cost analyses is the value assigned to preventing the loss of a life. But specifying what that value should be is a difficult political task and a fiendishly complicated economic chore. In this working paper, Georgia State University economist Spencer Banzhaf offers some estimates by drawing on previous efforts to determine this value.

Since most regulations seek to improve safety or working conditions, most of their estimated benefits involve lives saved. Economists have produced hundreds of studies that attempt to estimate the value people themselves place on their lives through the decisions they make, such as the wage premium they demand to take riskier jobs and how much they are willing to pay for safety devices that reduce risk. Economists also conduct surveys that ask people to contemplate such tradeoffs, known as contingent valuation studies.

In general, regulatory agencies and executive-branch departments would like to use a large Value of a Statistical Life (VSL) because that would help them justify more regulations from a benefit–cost perspective. From a public choice perspective, the agencies tend to want to produce more regulations. On the other hand, higher VSLs result in businesses and consumers spending increasingly more on safety and necessarily sacrificing other things they may prefer. For instance, a city that wants to ensure that every house can be reached by an emergency responder within five minutes of a call may need to increase taxes or else spend less on roads, schools, or parks in order to meet that standard.

Constraining federal agencies is Executive Order 12866, issued by President Bill Clinton in 1993, which requires executive-branch agencies to conduct a benefit–cost analysis for each major regulation (those that have an economic impact of over $100 million). That analysis must pass muster with the Office of Information and Regulatory Affairs (OIRA).

There is no one standard VSL used across government agencies. The Environmental Protection Agency has spent the most time and effort on the question, financing numerous studies over the last two decades. Banzhaf identifies 800 “unique estimates” that he includes in his analysis. These VSLs vary widely, from a few hundred thousand dollars to nearly $20 million.

There are now a number of meta-analyses that attempt to elucidate a representative number from all this research, but these meta-estimates also span a broad range, from $3.7 million to $12.3 million. Two decades ago, a meta-analysis by Janusz Mrozek and Laura Taylor produced an estimate of roughly $3 million. Several executive branch agencies chose to use that number at the time, but the EPA — which had adopted a number almost three times higher than that — chose instead to finance more studies on the issue, and those later studies ultimately supported its choice of a much higher VSL.

Banzhaf argues that despite the wealth of literature surrounding this topic, most of this work has been more or less ignored by the agencies. He reports that several of them arrived at their VSLs via a small number of studies, some of which are decades old. Two decades ago, when I worked for OIRA and conferred with agencies to help them with their VSL choices, I reached the same conclusion.

It’s tempting to say the agencies cherry-pick the studies that support the number they want to use. Banzhaf is more charitable, pointing out that the literature is complicated even for an economist, and is especially difficult for the non-economist policymakers who must ultimately make this decision. Relying on a small number of studies at least makes the decision tractable.

Banzhaf suggests that we need to devise a way to simplify how the agencies approach this. His solution is to construct, in essence, a meta-analysis of the meta-analyses, and to devise a way for the agencies to use the studies collected in all of the meta-analyses to estimate a number based on their methodological preferences, or to update their VSLs by easily incorporating new studies into the mix.
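To give a flavor of what pooling estimates across meta-analyses can look like, here is a deliberately simplified sketch. It uses generic inverse-variance weighting with made-up numbers; it is not Banzhaf's actual procedure, which weights and combines the underlying studies according to the agency's methodological preferences.

```python
# Simplified illustration of pooling meta-analytic VSL estimates
# (inverse-variance weighting on hypothetical numbers; not Banzhaf's method).
import numpy as np

# Hypothetical meta-analysis results: (mean VSL in $ millions, standard error)
meta_estimates = np.array([
    [3.7, 1.2],
    [6.1, 1.5],
    [8.0, 2.0],
    [9.6, 2.5],
    [12.3, 3.0],
])
means, ses = meta_estimates[:, 0], meta_estimates[:, 1]

weights = 1.0 / ses**2                      # inverse-variance weights
pooled_mean = np.sum(weights * means) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled_mean - 1.96 * pooled_se, pooled_mean + 1.96 * pooled_se
print(f"pooled VSL ~ ${pooled_mean:.1f}M (95% CI {ci_low:.1f} to {ci_high:.1f})")
```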

His model generates a mean VSL of $7 million — about 25% less than what the EPA and other agencies use — and his confidence interval ranges from $2.4 million to $11.2 million.

He is right that the sheer quantity of VSL studies is both a blessing and a curse; as someone who spent nearly a year doing almost nothing but reading VSL studies for OIRA two decades ago, I can attest to that. His approach offers agencies a way to simplify their adoption of a VSL without relying on only a small number of studies. What’s more, this may very well be the first step toward a future where the agencies and OIRA agree to coalesce on a single estimate that truly reflects the preferences of society and makes federal regulations more rational and cost effective. — Ike Brannon

Trade and Domestic Manufacturing

  • “Do Not Blame Trade for the Decline in Manufacturing Jobs,” by Stephen J. Rose. Center for Strategic and International Studies Report, October 4, 2021.

A narrative that both Democrats and Republicans seem to agree upon is that unfair trading practices by China and other U.S. trade partners have destroyed millions of well-paying American jobs and helped impoverish the nation’s middle class.

In a new CSIS policy analysis on trade, Urban Institute scholar Stephen Rose does not defend China’s trade practices, but neither does he find trade guilty of immiserating the middle class or destroying U.S. manufacturing jobs. Instead, he identifies a different culprit for the loss of those jobs: rising productivity. He points out that manufacturing productivity increased by 600% from 1980 to 2020. In steel, considered a key manufacturing industry, productivity rose even faster: it took over 10 hours of labor to produce a ton of steel in 1980 but just 1.5 hours last year.

This is a good thing. Being able to produce more goods with fewer workers means the displaced workers can undertake other activities that benefit the economy. Not everyone sees it that way, though, and Schumpeter’s notion of creative destruction is anathema to many people who say they want to protect the middle and working classes.

Rose does not see the reduction in manufacturing as an economic problem to be solved. He notes that the loss in manufacturing jobs — 7.5 million fewer people worked in manufacturing in 2020 than in 1980 — was swamped by the creation of 40 million new service jobs during that time. Most other developed countries also saw manufacturing employment fall over this period, so this isn’t solely a U.S. phenomenon. Rose also observes that the areas of the country that have lost the most manufacturing jobs in the last 40 years — New England and the Mid-Atlantic — currently have the most robust economies in the United States.

He draws on his previous research to show that what has become an almost universally accepted narrative — that middle- and working-class workers are earning less than they did two generations ago — is unsupported by the data. Rose estimates that average male compensation rose by as much as 50% from 1979 to 2014. He specifically criticizes economists Thomas Piketty, Emmanuel Saez, and Gabriel Zucman for obfuscating reality; their work has played a crucial role in pushing this false narrative. His observations debunk the conventional wisdom that Americans have suffered a decline in living standards over the last six decades:

Real GDP per capita in 1960 was one-third the value in 2019; life expectancy was eight years less; houses were smaller; and amenities such as air conditioning were rare. In terms of pay, median-income blue-collar workers in manufacturing in 2019 were paid nearly 50 percent more than their inflation-adjusted median in 1960.

Because of such observations, Rose is not championed by many politicians in either major American party. Democrats resent him for undercutting their message of economic exploitation of the working class. Republicans are wary of him because he offers progressive solutions such as stronger unions and a higher minimum wage to the problem of working-class poverty. Regardless, his work sheds light on a contentious issue and helps everyone who reads him to better understand the forces that truly are affecting the U.S. economy. If we are to have political fights over what needs to be done, we need both sides to understand and agree on what really is occurring in the economy. — I. B.