Corporate Accounting

“SOX after Ten Years: A Multidisciplinary Review,” by John C. Coates and Suraj Srinivasan. October 2013. SSRN #2343108.

The Sarbanes-Oxley Act of 2002 (SOX) was enacted after the bankruptcies of Enron and WorldCom. Enron was ranked seventh on the Fortune 500 in April 2001; by December, it had filed for bankruptcy. WorldCom was ranked 80th at the peak of its market capitalization in 1999; it filed for bankruptcy in July 2002. Both bankruptcies were associated with accounting fraud, and SOX was passed as a congressional response before the 2002 elections.

Academic criticism of SOX has been intense. In an article in this journal (Winter 2005–2006), Yale law professor Roberta Romano described the law as “Quack Corporate Governance.” She went on to say,

An extensive empirical literature suggests that those mandates were seriously misconceived because they are not likely to improve audit quality or otherwise enhance firm performance and thereby benefit investors as Congress intended. In the frantic political environment in which SOX was enacted, legislators adopted proposals of policy entrepreneurs with neither careful consideration nor assimilation of the literature at odds with the policy prescriptions. … The more general implication is the cautionary note that legislating in the immediate aftermath of a public scandal or crisis is a formula for poor public policymaking (at least in the context of financial market regulation).

Given that literature, I read this paper by John Coates (professor of law and economics at Harvard) and Suraj Srinivasan (associate professor at Harvard Business School) with great interest. Their conclusion is that SOX’s direct costs can be calculated easily but its benefits and indirect costs cannot, so evaluations of the law rest largely on authors’ prior assumptions about those benefits. Despite the intense early criticism by academics, most informed observers now conclude that, as implemented, SOX’s benefits roughly equal or exceed its costs.

One of the main concerns of scholars was that SOX would federalize corporate law and reduce the important role of state corporate charter competition. Coates and Srinivasan find that of 1,293 Delaware corporate law decisions from 2002 to 2012, only 15 referred to SOX and not one imposed liability on directors for failing to live up to SOX.

Another concern was that SOX would shift financial regulation from a disclosure regime to prescriptive command-and-control. But the authors conclude that SOX, as implemented, is a “comply or explain” regime and that, surprisingly, many firms still disclose control weaknesses only when restating earnings rather than proactively under SOX. In their view, SOX still does not induce enough disclosure.

Did SOX drive listings to foreign exchanges or cause firms to delist, as some critics predicted? No, because many foreign countries adopted similar standards. Smaller, less liquid, more fraud-prone firms did indeed exit U.S. public markets, but they would not have been subject to SOX anyway. Initial public offerings (IPOs) started declining before SOX, and the exemption of small firms from onerous SOX regulations under the 2010 Dodd-Frank Financial Reform Act has not resulted in a subsequent increase in IPOs. Fewer foreign firms now cross-list on U.S. exchanges, but those that do are larger and more profitable, so the total capitalization of foreign firms listed in the United States is higher post-SOX.

Has SOX created benefits? Earnings management and restatements have gone down. One innovative study examined the stock market reaction to firms that lobbied for and against SOX. Firms that lobbied against it were larger, more profitable, retained more cash, and had lower future growth opportunities. They experienced positive abnormal returns during SOX’s passage, and those gains did not dissipate over time. The authors interpret this as evidence of managerial mismanagement of free cash flow: investors expected SOX to constrain that mismanagement, and so bid up the shares of precisely the firms whose managers lobbied against the law.

Bank Capital Regulation

“Market-Based Bank Capital Regulation,” by Jeremy Bulow and Paul Klemperer. September 2013. SSRN #2317043.

The bank failures caused by the Great Recession have prompted numerous proposals to increase bank capital requirements. For some examples, see the paper by Anat Admati et al. that I discussed in my Winter 2010–2011 “Working Papers” column and the paper by Viral Acharya et al. that I discussed in Summer 2012. Jeremy Bulow and Paul Klemperer offer the most recent contribution to this genre.

Several stylized facts motivate their analysis. Regulatory capital is a poor measure of banks’ financial health; some banks certified as having adequate capital fail shortly thereafter, and investment tail risk is shifted to taxpayers. The current capital regime is also procyclical: banks have too much capital during booms and too little during recessions, so the need to raise equity during recessions acts as a tax on new investment in hard times. Finally, even a 20 percent capital requirement (much higher than prominent reformers have proposed) would not have prevented bank failures during the Great Recession; the losses at the 400 government-insured banks that failed between 2008 and 2011 averaged 24.8 percent of assets.

Bulow and Klemperer propose a new banking regime in which banks offer 100 percent reserve banking and at-risk maturity-transformation banking without taxpayer risk. Deposits that are now insured by the Federal Deposit Insurance Corporation would instead be invested in Treasuries (i.e., narrow banking). All unsecured debt would take the form of “equity recourse notes”: if the bank’s stock price ever fell below some threshold (say, 25 percent of the price when the debt was issued), note holders would receive stock in the bank instead of a cash payment. The notes would not be permanently converted to equity; rather, holders would receive stock instead of cash one payment at a time, and the notes would revert to ordinary debt if the market price of the bank’s stock later rose above the contractual threshold. Unlike other “catastrophe bond” proposals, conversion to equity would be triggered by market prices rather than by the breach of easily manipulated regulatory capital standards.
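A minimal sketch of that payment rule, assuming a single note with a fixed cash payment due and a 25 percent trigger (the function, the variable names, and the convention of valuing the shares at the current market price are my illustrative assumptions, not details from the paper):

```python
def ern_payment(cash_due, issue_price, current_price, trigger=0.25):
    """Make one scheduled payment on an equity recourse note (illustrative).

    If the bank's stock trades below the trigger fraction of its price at
    issuance, the payment is made in newly issued shares rather than cash;
    otherwise it is paid in cash. Because the test is applied one payment
    at a time, the note automatically reverts to ordinary debt once the
    stock recovers above the trigger.
    """
    threshold = trigger * issue_price
    if current_price < threshold:
        # Shares valued at the current market price -- an assumption for
        # illustration; the paper may specify a different conversion price.
        return {"cash": 0.0, "shares": cash_due / current_price}
    return {"cash": cash_due, "shares": 0.0}

# Example: a $5 payment due, stock issued at $40, now trading at $8 (below the $10 trigger)
print(ern_payment(5.0, issue_price=40.0, current_price=8.0))
# -> {'cash': 0.0, 'shares': 0.625}
```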

Health Expenditures and Bankruptcy

“Health and Financial Fragility: Evidence from Car Crashes and Consumer Bankruptcy,” by Edward R. Morrison, Arpit Gupta, Lenora M. Olson, Lawrence J. Cook, and Heather Keenan. November 2013. SSRN #2353328.

Before becoming a U.S. senator, Elizabeth Warren was a professor at Harvard Law School. Her intellectual reputation came from research showing that health care cost shocks play a disproportionate role in personal bankruptcy; that research found substantial medical bills (over $5,000) in half of the bankruptcies studied. It lent support to those who advocated the Affordable Care Act as a remedy for some Americans’ lack of health insurance.

Those stylized facts, of course, are not proof that adverse health conditions cause bankruptcy. It is possible that personal financial management behavior and underlying risk preferences simultaneously cause the development of both costly health conditions and financial default. So the observed relationship between health care costs and bankruptcy could be the result of selection rather than causal effects.

The authors use a database of police reports on all car crashes in Utah from 1992 through 2005 and link them to hospital admissions and bankruptcy cases. They find that bankruptcy rates for people whose crash injuries were severe enough to require emergency room admission were 30 to 50 percent higher in every year prior to the crash than the rates for people who suffered only minor injuries. Evidently, unobservable driver characteristics increase both the risk of a severe accident and the risk of bankruptcy; a household’s exposure to a severe car crash is endogenous to its underlying, unobserved characteristics.

To avoid the non-random selection of people into severe accidents, the authors employ a difference-in-differences research design that exploits differences in the timing of the crashes. The identifying assumption is that a driver’s pre-crash financial condition is uncorrelated with exactly when the crash occurs, at least within a short window (one or three years). The treatment group is those who had crashes in 1999–2002 and the control group is those who had crashes in 2002–2005. The authors find no difference in the pre- to post-crash change in bankruptcy rates between the treatment and control groups. Thus, in the context of Utah car crash victims, large medical costs appear to have no causal effect on personal bankruptcy rates.
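A stylized version of that comparison, assuming a person-year panel with hypothetical column names (none of the names or the file come from the paper), is sketched below; the coefficient on the interaction term is the difference-in-differences estimate, which the authors find to be indistinguishable from zero.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical person-year panel:
#   bankrupt     1 if the person filed for bankruptcy in that year
#   early_crash  1 if the crash occurred in 1999-2002 (treatment),
#                0 if it occurred in 2002-2005 (control)
#   post         1 for years after the treatment group's crash window
#   person_id    identifier used to cluster standard errors
df = pd.read_csv("utah_crash_panel.csv")  # hypothetical file

did = smf.ols("bankrupt ~ early_crash * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["person_id"]}
)
# The early_crash:post coefficient is the difference-in-differences estimate
# of the crash's effect on bankruptcy; a value near zero is consistent with
# the authors' finding of no causal effect of large medical costs.
print(did.params["early_crash:post"])
```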

Health Care Expenditures

“Physician Beliefs and Patient Preferences: A New Look at Regional Variation in Health Care Spending,” by David Cutler, Jonathan Skinner, Ariel Dora Stern, and David Wennberg. August 2013. NBER #19320.

Price-adjusted Medicare expenditures per beneficiary vary across metropolitan areas from $7,000 to $14,000 per year. Why? One possibility is patient demand; another is physician preferences. John Wennberg, the intellectual father of the evidence-based medicine movement at Dartmouth’s Geisel School of Medicine, has long thought that physician preferences, rather than scientific evidence, play a large role. In this paper, David Cutler (professor of economics at Harvard), Wennberg’s son David (Dartmouth medical school), Dartmouth economist Jonathan Skinner, and Kennedy School of Government doctoral candidate Ariel Dora Stern attempt to document the relative roles of patient demand and physician preferences using two surveys merged with geographically coded Medicare expenditure data for the last two years of life. The authors survey doctors (using patient vignettes that test for adherence to clinical guidelines) and patients (a random sample of Medicare beneficiaries) to ascertain their preferences about hypothetical end-of-life decisions. They find that variation in physicians’ preferences accounts for more of the expenditure variation than variation in patients’ preferences.

Cardiologists were asked about the treatment of two Class IV (the most severe) heart failure patients who are symptomatic even at rest. The vignettes were designed to make clear to the responding physician that neither patient was a candidate for further surgery such as angioplasty, stents, or bypass. “Comforters” were defined as doctors who would discuss palliative care always or almost always in these cases; “cowboys” were defined as doctors who would recommend intensive surgery most of the time anyway. Some 29 percent of cardiologists were comforters and 27 percent were cowboys. Across 64 large metropolitan areas, the higher the percentage of physicians classified as cowboys, the higher the two-year end-of-life Medicare expenditures. Increasing the percentage of cowboys by 10 percentage points increases end-of-life expenditures by 7.5 percent. In contrast, there was no relationship between aggregate patient demand for intensive treatment and expenditures. The authors conclude from their empirical model that if there were no cowboys and all physicians recommended palliative care, end-of-life expenditures would fall by 36 percent and total Medicare expenditures would fall by 17 percent.
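To put that relationship in perspective, here is a purely illustrative back-of-envelope calculation using only the figures reported above; the comparison of a 40 percent versus a 10 percent cowboy share is my hypothetical, not the authors’.

```python
# Reported relationship: +10 percentage points of "cowboys" in an MSA is
# associated with roughly +7.5 percent in two-year end-of-life spending.
pct_per_point = 7.5 / 10                 # 0.75 percent per percentage point

# Hypothetical comparison: an MSA where 40 percent of cardiologists are
# cowboys versus one where only 10 percent are.
gap_in_cowboy_share = 40 - 10            # percentage points
predicted_spending_gap = gap_in_cowboy_share * pct_per_point
print(f"~{predicted_spending_gap:.1f} percent higher end-of-life spending")  # ~22.5 percent
```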

Unemployment Insurance

“Unemployment Insurance Experience-Rating and Labor Market Dynamics,” by David D. Ratner. January 2014. SSRN #2376364.

Readers of Regulation likely are familiar with the work of University of Chicago economist Casey Mulligan about the role that the incentive effects of transfer payments played in increasing the unemployment rate during the Great Recession and its aftermath. This paper makes similar arguments about the role of experience-rating in unemployment insurance.

Unemployment insurance benefits are paid for by taxes on employers set by the states. The taxes are not fully experience-rated; that is, employers with a high rate of layoffs do not pay taxes sufficient to cover the benefits received by their discharged employees, while employers with a low rate of layoffs pay taxes larger than the benefits their discharged employees receive. The degree of experience-rating, and thus the marginal tax an employer faces when it lays off a worker, varies across states.

In this paper, Federal Reserve Board economist David Ratner uses this variation in experience-rating across states to calculate the effect of increased experience-rating. The average state marginal tax cost of a layoff to employers (the present discounted value of the benefits that are paid back through future higher taxes) is 54 percent of the benefits paid to a firm’s laid-off employees. Job reallocation (the sum of job creation and job destruction) falls linearly as the marginal cost to employers of a layoff increases. Job destruction goes down because employers understand that layoffs increase future unemployment taxes; job creation also decreases because employers anticipate the possibility of layoffs, and thus higher taxes, when they hire someone. Ratner concludes that increasing experience-rating by 5 percent would decrease layoffs by 2 percent but also decrease job creation by 1.5 percent, resulting in a net decrease in unemployment of 0.21 percentage points. If the system were fully experience-rated and employers paid 100 percent of the cost of the benefits to a laid-off employee, job destruction would decrease by 17 percent while job creation would decrease by 13.7 percent, resulting in a net increase in employment.
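A simple way to see the incentive at work, using the 54 percent figure reported above (the dollar amount is a hypothetical chosen for illustration):

```python
# The "marginal tax cost" of a layoff is the present discounted value of the
# additional unemployment-insurance taxes an employer pays because of one more
# layoff, expressed as a share of the benefits its laid-off worker draws.
benefits_drawn = 10_000        # hypothetical UI benefits paid to the worker
marginal_tax_cost = 0.54       # average across states, as reported by Ratner

employer_repays = marginal_tax_cost * benefits_drawn
implicit_subsidy = benefits_drawn - employer_repays
print(f"Employer repays ${employer_repays:,.0f}; the remaining "
      f"${implicit_subsidy:,.0f} is, in effect, a subsidy to the layoff.")
# Full experience-rating (a marginal tax cost of 1.0) would eliminate this
# subsidy, which is why it reduces job destruction more than job creation
# in Ratner's estimates.
```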

The Great Recession left the state unemployment trust funds owing $20 billion to the federal government. Ratner shows that if that debt is repaid through an increase in experience-rating, unemployment would decrease because of the improved incentives. In contrast, if the debt is repaid through a flat-rate tax increase with no change in experience-rating, unemployment would increase because of the perverse incentives on employers.

Housing Markets

“The 1992 GSE Act and Loan Application Outcomes,” by Shawn Moulton. Journal of Housing Economics, forthcoming.

A prominent component of the conventional wisdom about the housing bubble, particularly for conservatives, is that government affordable-housing policies fueled the bubble and its collapse. Specifically, those folks criticize the Community Reinvestment Act that Congress enacted in 1977 and lending goals imposed on the government-sponsored enterprises (GSEs) Fannie Mae and Freddie Mac in 1992. The policies supposedly led to an increase in high-risk mortgages and their subsequent default. Peter Wallison voiced those claims in a recent New York Times op-ed (January 6, 2014).

In previous “Working Papers” columns (Spring 2011 and Fall 2012) I described papers that challenge this widely held belief. Those papers used regression discontinuity designs that take advantage of the eligibility standards created by those laws. The arbitrary standards (a loan is credited toward the affordability goals if the borrower’s income, or his neighborhood’s median income, is just below 60 percent or 80 percent of the Metropolitan Statistical Area’s (MSA) median income, but not if it is just above those thresholds) serve the same function as random assignment in an experiment. The only things that vary between people just above and just below the qualifying thresholds are randomly distributed, so any discontinuities in outcomes, such as defaults or the percentage of applicants accepted, can be attributed to the affordability standards. The papers found no evidence of threshold effects: loans that just qualified as CRA- or GSE-compliant performed no worse than loans that just missed qualifying.
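A minimal sketch of that regression-discontinuity comparison, assuming loan-level data with hypothetical column names and an illustrative bandwidth around the 80 percent cutoff (none of these details come from the papers themselves):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical loan-level data:
#   income_ratio  borrower income divided by MSA median income
#   default       1 if the loan later defaulted
loans = pd.read_csv("loans.csv")          # hypothetical file

cutoff, bandwidth = 0.80, 0.05            # 80 percent threshold, 5-point window
window = loans[(loans.income_ratio - cutoff).abs() <= bandwidth].copy()
window["eligible"] = (window.income_ratio < cutoff).astype(int)  # goal-eligible side
window["dist"] = window.income_ratio - cutoff

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on `eligible` is the jump in default rates at the threshold.
# The papers described above find this discontinuity to be essentially zero.
rd = smf.ols("default ~ eligible * dist", data=window).fit()
print(rd.params["eligible"])
```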

In this paper, Shawn Moulton, now an economist at Abt Associates, casts additional doubt on the importance of the housing affordability goals in the housing bubble. First, he notes that actual GSE purchases have always eclipsed the goals set for them, which suggests that the goals did not make the GSEs deviate from their preferred lending strategies. Second, he examines individual mortgage data from 1996–1997 (shortly after the GSE goals went into effect) and 2006–2007 (the height of the housing mania), looking for differences around the eligibility cutoffs in three outcomes: the percentage of loans granted, the percentage of loans purchased by a GSE, and interest rates relative to comparable Treasury securities. The only effect he finds is that loans to borrowers whose income was just below 60 percent of median MSA income were 1.1 percentage points more likely to be purchased by a GSE in 2006–2007; the 95 percent confidence interval runs from 0.16 to 2.01 percentage points.

The author employs some arithmetic to convey a sense of the potential magnitude of this effect. In the dataset there are 1,077,048 GSE-eligible originations whose borrowers earn between 40 and 60 percent of the median MSA income. Suppose the upper end of the confidence interval (2.01 percentage points of all eligible loans) were induced by the affordability goals. That would mean that roughly 21,648 additional loans were bought by the GSEs. According to Moulton, in the second quarter of 2009 (at the height of loan defaults), 25.35 percent of subprime loans were delinquent. If that percentage of the additional 21,648 loans lost all their value (at an average loan amount of $92,400), the losses would have been about $500 million. Because total write-downs exceeded $500 billion during 2008, the affordability goals, even with those generous assumptions, explain only about 0.1 percent of the losses.
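The back-of-envelope calculation, reproduced step by step from the figures above:

```python
eligible_originations = 1_077_048   # GSE-eligible loans to borrowers at 40-60% of MSA median
upper_ci = 0.0201                   # upper end of the 95 percent confidence interval
extra_loans = eligible_originations * upper_ci          # ~21,649 additional GSE purchases

delinquency_rate = 0.2535           # subprime delinquency rate, second quarter of 2009
avg_loan = 92_400                   # average loan amount, dollars
# Generous assumption: every delinquent loan among the extra purchases loses its full value.
worst_case_losses = extra_loans * delinquency_rate * avg_loan   # ~ $507 million

total_writedowns = 500e9            # total write-downs exceeded $500 billion in 2008
print(f"{extra_loans:,.0f} extra loans, ${worst_case_losses / 1e6:,.0f} million in losses, "
      f"{worst_case_losses / total_writedowns:.1%} of total write-downs")
# -> about 21,649 loans, $507 million, 0.1% of total write-downs
```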

Bank Accounting

“Market Reactions to Policy Deliberations on Fair Value Accounting and Impairment Rules During the Financial Crisis of 2008–2009,” by Robert M. Bowen and Urooj Khan. September 2013. SSRN #2327732.

During the financial crisis, many commentators argued that accounting rules mandating that loans be written down from face value to “market” value exacerbated the contagion among financial institutions. That is, financial institutions that had to sell off assets to meet capital or margin requirements spread their troubles to other institutions if the latter were forced to mark down similar assets because of the fire-sale prices received by the former. If, instead, accounting rules allowed such assets to be valued at initial or book value, the contagion would stop because forced asset sell-offs to meet regulatory capital requirements would not have to occur.

This paper assesses the effect of accounting rules on the market value of bank stocks during 10 event windows in the fall of 2008 and spring of 2009 in which regulatory authorities discussed relaxing mark-to-market bank accounting. If investors thought relaxation would be good for banks, they would bid up bank stock values during event windows in which relaxation seemed likely and bid down values when the status quo seemed likely. Conversely, if investors thought rule relaxation made it more likely that banks would withhold information investors value, they would bid down bank stocks during event windows in which relaxation was discussed and bid up values when the status quo dominated the agenda. The results confirm the former: investors reacted positively to the possible relaxation of the then-existing rules mandating mark-to-market accounting and negatively to the opposite possibility.
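A stylized version of that event-study test, assuming daily return series and a hypothetical event-window indicator (the column names and file are illustrative, and the paper’s actual windows and methodology may differ):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily data:
#   date       trading day
#   bank_ret   return on a portfolio of bank stocks
#   mkt_ret    return on the overall market
#   relax      1 on event-window days when relaxation of mark-to-market
#              rules looked more likely, 0 otherwise
returns = pd.read_csv("bank_returns_2008_2009.csv", parse_dates=["date"])

# Market-model regression with an event-window dummy: the coefficient on
# `relax` measures the abnormal return on bank stocks during windows in
# which relaxation seemed likely. A positive estimate matches Bowen and
# Khan's finding that investors welcomed the prospect of relaxed rules.
event_study = smf.ols("bank_ret ~ mkt_ret + relax", data=returns).fit()
print(event_study.params["relax"])
```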