California’s Fiscal Shell Game

Proponents of limited government have long advocated “starve the beast” policies — limiting taxes as a means of limiting government. But do these strategies work? At the federal level, empirical studies by Cato’s own William Niskanen and by Christina and David Romer of the University of California, Berkeley, have found that cutting taxes does not lead to reduced spending; Uncle Sam simply makes up the lost revenue by selling Treasuries.

But what about at the state and local levels, where governments have less ability to augment revenue by borrowing? In a new paper, the son-and-father team of Colin and Matthew McCubbins studies tax limitations and their effects in California, the birthplace of the tax limitation movement. In 1978, California voters approved Proposition 13, which rolled back property assessments to 1975 levels, limited annual increases in property assessments to 2 percent, and required supermajority legislative votes to increase certain other state and local taxes.

So did these measures work — did they keep total California taxes low and slow the expansion of state government? The McCubbins find that, though tax bills initially fell, real total taxes per capita reached and then exceeded their 1978 level by the late 1980s. Meanwhile, state expenditures in real dollars never fell. California’s state and local governments funded the spending by raising sales taxes, adopting new fees, establishing new service district assessments, and taking on more debt. The authors conclude that the result is larger and less accountable government in the Golden State.

Ethanol Mandates and Food Prices

Federal law requires that a growing amount of ethanol be blended with U.S. gasoline each year. The mandate is for 11 billion gallons this year; under current law, it will top out at 36 billion gallons by 2022.

Meeting this year’s mandate requires the fermentation of about 4.2 billion bushels of corn, roughly one-third of U.S. production (and 5 percent of the world’s caloric production in 2007). As is well known, corn is a major part of the U.S. diet and a growing part of the world’s diet. So what is the effect on food prices of diverting so much of the crop to energy?

Using new supply and demand elasticity estimates, Michael Roberts of North Carolina State University and Wolfram Schlenker of Columbia University calculate that the U.S. ethanol mandate increases world food prices by 20 to 30 percent. Those price increases, in turn, spur the conversion of forestland into cropland, which offsets some of the carbon emission reductions attributed to biofuels.
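
To see the mechanics behind an estimate like this, here is a back-of-the-envelope version in Python. The formula is the standard log-linear approximation for a demand-side diversion; the elasticities are made-up illustrative values, not Roberts and Schlenker’s estimates.

```python
# Back-of-the-envelope price effect of diverting crop output to ethanol.
# Illustrative elasticities only -- NOT Roberts and Schlenker's estimates.

def price_increase(diverted_share, supply_elast, demand_elast):
    """Approximate proportional price rise when a share of world output
    is diverted to a new use:  dP/P ~= share / (e_supply + |e_demand|)."""
    return diverted_share / (supply_elast + abs(demand_elast))

share = 0.05  # ethanol's ~5% share of world caloric production, cited above
for e_s, e_d in [(0.10, -0.05), (0.15, -0.10)]:  # hypothetical values
    print(f"supply elast. {e_s}, demand elast. {e_d}: "
          f"price rises ~{price_increase(share, e_s, e_d):.0%}")
```

With staple-crop supply and demand both highly inelastic, even a 5 percent diversion implies price increases in the 20–30 percent range.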

Intel Antitrust

As this issue of Regulation goes to press, there is news that computer chip maker Intel has agreed to a settlement with the Federal Trade Commission concerning some of Intel’s marketing practices. The FTC charged that the practices, which include bundling and loyalty discounts, violate federal antitrust laws because they are intended to keep competing chip makers like Advanced Micro Devices (AMD) from achieving sufficient economies of scale.

In 1984, University of Chicago law professor (and later federal judge) Frank Easterbrook published a highly influential paper concerning such antitrust cases. These cases require courts to determine whether or not the disputed marketing arrangement ultimately enhances consumer welfare. Easterbrook noted that, as with most decisions, judges in these cases are susceptible to making both Type I (false positive) and Type II (false negative) errors. A Type I error occurs when the court incorrectly determines that consumers are being harmed by the practice. A Type II error occurs when the court incorrectly determines that no harm is occurring. Easterbrook argued that Type I errors are more costly than Type II errors. Dynamic markets and innovation offset false negative errors — if a permitted practice gives a firm an unfair advantage, competing firms will continue to search for ways to offset that advantage. But nothing offsets the overreach of government and the judiciary in false positive errors — once a pro-consumer practice is deemed illegal, it is off the table (unless a subsequent court reverses the decision). Easterbrook concluded that judges should weigh the economic facts and literature on a suspect market arrangement very carefully, and declare a violation only when the evidence is unambiguous.
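
A stylized way to see Easterbrook’s asymmetry is to compare the present value of the two errors’ costs when one decays and the other does not. The numbers below are hypothetical, chosen only to illustrate the logic.

```python
# Stylized present-value comparison of antitrust error costs.
# All figures are hypothetical illustrations of Easterbrook's argument.

def pv_of_error(annual_harm, discount_rate, decay_rate):
    """PV of a perpetual annual harm that decays at decay_rate per year:
    PV = annual_harm / (discount_rate + decay_rate)."""
    return annual_harm / (discount_rate + decay_rate)

harm, r = 100.0, 0.05  # hypothetical yearly consumer harm and discount rate

# Type II (false negative): rivals innovate around the permitted practice,
# so assume the harm erodes at 20% per year.
print(f"PV of Type II error: {pv_of_error(harm, r, 0.20):.0f}")   # ~400

# Type I (false positive): the ban on a pro-consumer practice persists
# until a court reverses it -- assume only 2% decay per year.
print(f"PV of Type I error:  {pv_of_error(harm, r, 0.02):.0f}")   # ~1429
```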

Consider the FTC’s charges against Intel in light of Easterbrook’s paper. Intel’s discounts began in 1999, so how has AMD fared since? In a new working paper, George Mason University law professor Joshua Wright argues that AMD’s and Intel’s market shares have changed little since then, and that Intel’s cumulative abnormal returns over the period are slightly negative. These facts, he says, combined with the cumulative evidence in the literature that loyalty discounts are pro-consumer, suggest that the FTC case has no basis within Easterbrook’s error-correction framework.

In another recent working paper, University of Michigan law professor Daniel Crane examines the same case but makes a different argument: the complaint against Intel would effectively require the chip maker to recover some of its fixed research and development costs in the price of every processor it sells. That would reverse decades of antitrust law treating pricing below marginal cost as a necessary condition for predation. And it would convert the computer chip industry into a quasi-public utility in which average-cost recovery is guaranteed and innovation and risk taking are discouraged.

FDIC Surcharges

Since the Great Depression, the Federal Deposit Insurance Corporation has guaranteed the safety of deposits at member banks. In exchange for the mandated insurance, the banks pay fees to the FDIC that are then placed in the Deposit Insurance Fund, which is used to cover liabilities in the event of bank failures.

The recent financial crisis depleted the fund, prompting the FDIC in September 2009 to collect a special assessment of 5 basis points on each insured institution’s assets minus its Tier 1 capital (composed mainly of common stock and retained earnings). For the first time, the FDIC assessment was based on total assets rather than just insured deposits. Because larger banks tend to raise funds from sources other than consumer deposits, the surcharge appeared to be an attempt to charge larger banks for the “too-big-to-fail” protection that the federal government extended to them during the financial crisis.

However, this extra charge was limited: the FDIC capped the surcharge at 10 basis points of insured deposits. The cap therefore bound whenever a bank’s insured deposits were less than half of its assessment base of assets minus Tier 1 capital (roughly, less than half of its liabilities). In a recent working paper, Scott Hein of Texas Tech and coauthors calculate that, of the 19 largest banks that underwent the Treasury Department’s 2009 stress tests, nine benefited from the cap, saving a combined $609 million. Citigroup alone saved $204 million.
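
The cap’s mechanics are easy to formalize. The sketch below applies the assessment rule described above to a hypothetical balance sheet (the figures are invented, not any actual bank’s).

```python
# Sketch of the September 2009 FDIC special assessment with its cap.
# Balance-sheet figures below are hypothetical.

def special_assessment(assets, tier1_capital, insured_deposits):
    """5 bp on (assets - Tier 1 capital), capped at 10 bp of insured deposits."""
    uncapped = 0.0005 * (assets - tier1_capital)
    cap = 0.0010 * insured_deposits
    return min(uncapped, cap), uncapped, cap

# A hypothetical large bank funded mostly by non-deposit sources:
# $1,000B in assets, $80B Tier 1 capital, $300B insured deposits.
charged, uncapped, cap = special_assessment(1000e9, 80e9, 300e9)
print(f"uncapped: ${uncapped/1e6:,.0f}M  cap: ${cap/1e6:,.0f}M  "
      f"charged: ${charged/1e6:,.0f}M")
# uncapped: $460M  cap: $300M  charged: $300M -- the cap binds because
# insured deposits ($300B) are less than half the assessment base ($920B).
```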

Good Monetary Policy or Luck?

After the agonizing stagflation that plagued the U.S. economy in the 1970s, many macroeconomic data series exhibited decreased volatility from 1984 until 2007 — a period that has come to be known as the Great Moderation. Why did it occur?

Conventional wisdom credits Paul Volcker and his successors at the Federal Reserve with better monetary policy that controlled inflation and supplied stimulus (and applied the brakes) as needed. However, some economists now argue that the Great Moderation was more the result of good luck than good Fed policymaking (and the 1970s were more the result of bad luck than bad policy). An early paper in this literature, by Christopher A. Sims and Tao Zha, appeared in the March 2006 American Economic Review. A more recent working paper by Jesús Fernández-Villaverde, Pablo A. Guerrón-Quintana, and Juan Rubio-Ramírez goes so far as to claim that “our reading of monetary policy during the Greenspan years is that it was not too different from the policy in the Burns-Miller era; it just faced much better shocks.”

BMI over Time

Conventional wisdom holds that Americans have grown more obese over the last 25 years, in part because we eat too much restaurant food. Michael Anderson and David A. Matsa critique the latter part of that narrative elsewhere in this issue (see “Restaurants, Regulation, and the Supersizing of America,” p. 40). Critiques of the former part start with the work of Jeffrey Friedman, a professor of molecular genetics at Rockefeller University. Friedman is well known for discovering leptin, a hormone that regulates food intake in humans.

Friedman argues that, though there has been a dramatic change over the last 25 years in the number of Americans classified as “obese,” there has not been nearly as dramatic a change in the typical person’s weight. A person is considered obese, he notes, if that person’s body mass index (BMI) exceeds 30. (BMI is computed by dividing a person’s weight in kilograms by the square of the person’s height in meters.) Friedman notes that, in 1991, the average American had a BMI of 26.7 — not far below the threshold for obesity. Little or no change in the weight of most people, coupled with much larger weight gains among the heaviest people, produced a dramatic increase in the percentage of the population classified as obese.
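
Friedman’s point is easy to check against the definition. Here is a minimal BMI calculator, with a hypothetical height chosen for illustration:

```python
# BMI and the obesity threshold, per the definition above.

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms over height in meters, squared."""
    return weight_kg / height_m ** 2

height = 1.75                 # hypothetical height in meters
w_avg = 26.7 * height ** 2    # weight implying the 1991 average BMI (~81.8 kg)
w_obese = 30.0 * height ** 2  # weight at the obesity threshold (~91.9 kg)

print(f"weight at BMI 26.7: {w_avg:.1f} kg")
print(f"gain needed to cross BMI 30: {w_obese - w_avg:.1f} kg")  # ~10 kg
```

For a person of that height, a gain of roughly 10 kilograms moves the 1991 average into the obese category, which is why the obesity count is so sensitive to movements in the upper part of the weight distribution.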

Friedman’s argument is supported by a new working paper by John Komlos of the University of Munich and Marek Brabec of the Czech Academy of Sciences. Komlos and Brabec analyze U.S. BMI data by birth cohort from 1882 to 1986, breaking each cohort into ten decile groups (the lightest 10 percent, the next-lightest 10 percent, and so on up to the heaviest 10 percent). They argue that BMI has been increasing gradually for the last 100 years. Moreover, the weight gains have not been uniform across the distribution. Among 50-year-old U.S.-born white men, the lightest 20 percent have seen no increase in BMI in the last 55 years, the middle 50 percent have seen modest increases, and the top 30 percent have seen much larger increases. The data on the rate of increase in BMI (the first derivative) are even more right-skewed: the rate of increase for native-born white men has been falling since 1975 for the bottom 70 percent of the distribution, but rising for the top 30 percent.
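
For readers who want to see the shape of the computation, here is a minimal sketch of a decile-by-cohort analysis on synthetic data. The data and column names are invented for illustration; Komlos and Brabec fit smoothed centile curves to actual survey data.

```python
# Decile-by-cohort BMI trends on synthetic data (illustration only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
cohorts = np.repeat(np.arange(1900, 1990, 10), 500)
# Right-skewed synthetic BMI values drifting upward across cohorts.
bmi = 20 + 0.02 * (cohorts - 1900) + rng.lognormal(1.0, 0.5, cohorts.size)
df = pd.DataFrame({"cohort": cohorts, "bmi": bmi})

# Assign each observation to a decile within its birth cohort...
df["decile"] = df.groupby("cohort")["bmi"].transform(
    lambda s: pd.qcut(s, 10, labels=False))

# ...average BMI within each (cohort, decile) cell...
decile_means = df.groupby(["cohort", "decile"])["bmi"].mean().unstack()

# ...and take cohort-to-cohort differences (a discrete first derivative).
print(decile_means.diff().round(2))
```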

Credit Cards vs. Merchants

Merchants pay fees to banks and to Visa and MasterCard for processing credit card and debit card payments. Many merchants argue that the fees are onerous, and they have organized politically to resist them. Their efforts bore fruit in the recently passed financial reform bill, which contains a provision instructing the Federal Reserve to issue rules regulating the fees associated with debit cards.

But are merchants, in fact, overcharged? In a new working paper, George Mason University law professor Todd Zywicki documents that in the late 1960s, before universal credit cards, stores ran their own layaway and store credit systems and suffered an average loss on credit sales of 3.4 percent. In 2008, by contrast, merchants paid $27.5 billion in fees to Visa and MasterCard, while credit card charge-offs were $50 billion. That is, by outsourcing the credit function to banks, merchants sold $50 billion in goods to purchasers who never repaid the banks, yet the merchants paid only $27.5 billion for that protection. Under the old in-house systems, the entire $50 billion loss would have been borne by the merchants.
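
The arithmetic of that trade, in miniature, using the figures cited above:

```python
# Merchants' trade-off in 2008, in billions of dollars (figures from above).
fees_paid = 27.5       # fees paid to Visa and MasterCard
losses_shifted = 50.0  # charge-offs absorbed by the card-issuing banks

print(f"Cost of outsourcing credit:     ${fees_paid}B")
print(f"Losses avoided:                 ${losses_shifted}B")
print(f"Net saving vs. in-house credit: ${losses_shifted - fees_paid}B")
```

On these figures, merchants come out $22.5 billion ahead of the pre-Visa arrangement, in which they bore such losses themselves.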

Readings

  • “A War on Obesity, Not the Obese,” by Jeffrey M. Friedman. Science, February 7, 2003.
  • “An Antitrust Analysis of the Federal Trade Commission’s Complaint against Intel,” by Joshua D. Wright. June 2010. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1624943.
  • “Do Tax Cuts Starve the Beast? The Effect of Tax Changes on Government Spending,” by Christina D. Romer and David H. Romer. Brookings Papers on Economic Activity, Spring 2009.
  • “Identifying Supply and Demand Elasticities of Agricultural Commodities: Implications for the U.S. Ethanol Mandate,” by Michael J. Roberts and Wolfram Schlenker. NBER Working Paper No. 15921, April 2010.
  • “Limiting Government: The Failure of ‘Starve the Beast’,” by William A. Niskanen. Cato Journal, Vol. 26, No. 3 (Fall 2006).
  • “On Large Bank Subsidies from the Federal Deposit Insurance Corporation,” by Scott E. Hein, Timothy W. Koch, and Chrislain Nounamo. April 2010. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1597822.
  • “Predation Analysis and the FTC’s Case against Intel,” by Daniel A. Crane. May 2010. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1617364.
  • “Proposition 13 and the California Fiscal Shell Game,” by Colin H. McCubbins and Matthew D. McCubbins. December 2009. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1548024.
  • “Reading the Recent Monetary History of the U.S., 1959–2007,” by Jesús Fernández-Villaverde, Pablo A. Guerrón-Quintana, and Juan Rubio-Ramírez. NBER Working Paper No. 15929, April 2010.
  • “The Economics of Payment Card Interchange Fees and the Limits of Regulation,” by Todd J. Zywicki. June 2010. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1624002.
  • “The Limits of Antitrust,” by Frank H. Easterbrook. Texas Law Review, Vol. 63, No. 1 (1984).
  • “The Trend of BMI Values by Centiles of U.S. Adults, Birth Cohorts 1882–1986,” by John Komlos and Marek Brabec. July 2010. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1649363.
  • “Were There Regime Switches in U.S. Monetary Policy?” by Christopher A. Sims and Tao Zha. American Economic Review, Vol. 96, No. 1 (March 2006).