
April 25, 2019 9:27AM

The True Winners and Losers of Financial Regulation: Remarks at the New York League of Independent Bankers

Earlier this month, I had the pleasure of delivering some remarks to the New York League of Independent Bankers. I spoke about how and why financial regulation often has consequences that are very different from the ones that policymakers intend. What follows are my written remarks — I hope you enjoy reading them.

Finding myself in New York before a group of community bankers, I cannot help but think of George Bailey, the lead character in the 1946 movie It's A Wonderful Life.

Bailey is of course the manager of Bailey Bros.' Building and Loan, the local bank in Bedford Falls, New York. He's also the very picture of the upstanding community banker: generous, altruistic, and always ready to sacrifice himself for his family and neighbors. When Bailey falls on hard times and is contemplating suicide, his guardian angel can show him multiple examples of how the world would be a worse place without him.

For decades, that movie shaped popular perceptions of the good that banking can do. It offered a welcome contrast to the all-too-common stereotype of banking as a business of questionable social value, altogether separate from "the real economy." The people in this room scarcely need reminding that, were it not for all the services that banks provide at a comparatively low cost — diversification, payments, safekeeping, the transfer of funds across time and place, and more — life would be harder, less secure, and less comfortable.

Structural changes in the U.S. banking landscape

Yet, watching It's A Wonderful Life in 2019, one also wonders whether George Bailey could run his little local thrift today. In some ways, of course, his life might be less difficult: deposit insurance means most bank customers don't rush to bank offices in times of stress — although some did, in America, Britain, and elsewhere, at the height of the last financial crisis.

But a number of long-term changes in the U.S. banking landscape make me skeptical that the Bailey Bros.' Building and Loan could remain a thriving, independent institution in 2019.1 Some of these changes are benign: Economies of scale, technological innovation, and the removal of branching restrictions since the late 1970s have ushered in major bank consolidation.

[Chart: number of FDIC-insured commercial banks over time]

Whereas there were more than 14,000 FDIC-insured commercial banks in the mid-1980s, there are now just under 5,000.

That doesn't mean access to banking services has declined; in fact, for most people it has increased, with the number of bank offices more than double what it was in the 1960s. Were Bailey around today, he would probably find himself vying for his neighbors' custom with the Bedford Falls branches of Chase, Citibank, and Bank of America.

But it would be naïve to suggest that the consolidation of the last four decades is just the consequence of increased competition and other market phenomena. On the contrary, along with the spread of branching and information technology, one of the strongest secular trends in banking since 1970 is the steady increase in government regulation. According to the Mercatus Center, the number of regulatory restrictions and mandates related to credit intermediation quadrupled between 1970 and 2010, from around 10,000 to just over 40,000.

[Chart: regulatory restrictions related to credit intermediation, 1970–2010. Source: Mercatus Center]

That, of course, was before the passage of Dodd-Frank. Mercatus researchers estimate that the post-crisis legislation on its own added more than 27,000 new restrictions to the rulebook, calling it "one of the biggest regulatory events ever."

[Chart: regulatory restrictions added by Dodd-Frank. Source: Mercatus Center]

George Bailey would struggle to recognize the 21st-century banking landscape. And he might also struggle to hold on to his bank. Below are the FDIC's 2018 data on the return on assets and shareholder equity among different-sized banks. As the table shows, banks with less than $100 million in assets — of which the FDIC alone regulates 1,278 — have rates of return 20 to 40 percent below those of larger institutions. Their return on equity, at an average of 7.59 percent, is also considerably lower than the 10 percent figure that equity analysts consider healthy.

As it turns out, being a small bank in 2019 is sometimes not such a wonderful life.

With regulation as with so much else, there is a strong status quo bias: we assume that whatever is today will remain forever. One of the most entertaining parts of my research at Cato is going through the archives to see what regulators, industry players, and policymakers in the past thought would be the future of banking. Invariably, all overstated the permanency of the status quo and discounted the possibility of radical transformation.

We assume the regulation that exists today will be there tomorrow. Many among us also assume that all financial regulation is there for a good reason: after all, new rules typically follow bad experiences during times of stress. Experts identify the policies and behaviors that they believe caused these problems, come up with what they deem to be appropriate fixes, and enact them for what they consider to be the good of the public. Problem solved. That, at least, is the conventional account of financial regulation.

But is it true? Consider Dodd-Frank, or, to use the legislation's formal name, the Wall Street Reform and Consumer Protection Act. Most people would agree that the overwhelming policy issue that the financial crisis uncovered was the "too-big-to-fail" problem — the existence of an implicit government bailout guarantee for the largest financial institutions. Indeed, many sections of Dodd-Frank mandate new regulations for the industries and products at the epicenter of the crisis: insurance, credit default swaps, orderly liquidation, and SIFI capital buffers, for instance. But more than a few people would doubt the contention that post-crisis regulation has successfully removed that implicit bailout guarantee. Even before 2008, the U.S. financial system, weighed down by 40,000 separate regulations, was anything but a "Wild West." That fact alone warrants further skepticism that more rules are the necessary and sufficient fix.

Another reason to doubt the conventional narrative around financial regulation is that the process of regulatory design is much messier than what that narrative suggests. Financial regulation, especially in the high-stakes political environment of 21st-century America, where government accounts for roughly 38 percent of GDP, is hardly the exclusive province of academics and public-spirited technocrats. It's a tug-of-war involving interest-group pressures, reciprocal favors between politicians, rent-seeking, and political grandstanding — along with some well-intentioned advocacy, to be sure. Yet even then, good intentions are no guarantee of satisfactory outcomes.

Intentions vs. Consequences in Financial Regulation

Broadly, there are four motivations for most financial regulation:

  • prudential regulation, or what has misleadingly come to be called "financial stability";2
  • consumer protection from fraud and abuse;
  • national security and the prosecution of crime;
  • industrial interventions, whether to promote competition or to restrict it.3

Most financial legislation throughout America's history can be attributed to one or more of those four motivations. The National Banking Acts of 1863 and 1864, for instance, mainly belong in category 4, although their proponents might also have cited 1 as another reason to enact them. The 1913 Federal Reserve Act is associated primarily with category 1, because many people attribute the frequency and severity of financial crises to the absence of a lender of last resort.4 The 1970 Bank Secrecy Act falls under category 3, while the Dodd-Frank package passed in 2010 includes measures related to 1, 2, and 4.

Those are the motivations behind financial regulation, but that doesn't ensure that regulation ends up achieving its desired aims. The National Banking Acts and subsequent Civil War legislation made it increasingly difficult for state-chartered banks to compete with nationally chartered banks. The Federal Reserve Act gradually removed the right of note issue from all U.S. banks. But the presence of a lender of last resort did not mitigate the widespread panics and bank failures of the Great Depression. Deposit insurance, enacted in 1934 to prevent bank runs, has had the unintended effect of providing banks with a cheap funding source, and international evidence suggests that the more generous the deposit insurance system, the more risk banks are willing to take on.

That past financial regulation has had effects different from, and sometimes even contrary to, the ones its authors intended gives additional reason to doubt that "this time will be different." Nor is it likely, unfortunately, that the process of regulating will become any "cleaner" or less vulnerable to interest-group abuse. If anything, the proliferation of regulation leads to a growth in the number of vested interests, who have much to lose from removing regulation. I'd like to discuss three examples of such regulatory entrenchment from my own research, which will hopefully resonate with you: the Community Reinvestment Act (CRA), the Bank Secrecy Act (BSA), and the use of prudential capital buffers.

Community Reinvestment Act

Passed in 1977, the CRA required depository institutions — except for credit unions — to lend in the areas where they collected deposits. At the time, anti-competitive restrictions on branching, along with statutory ceilings on the rates that banks could pay on customer deposits, meant that banks had little incentive to satisfy all the profitable credit demand in their communities. A 40-year legacy of redlining had also caused a paucity of data on the value of collateral in certain (typically minority) neighborhoods, making credit underwriting more difficult. In that context, a regulatory mandate for lending may have been defensible, although even at the time, the relevant regulators found the CRA to be an imperfect means of addressing redlining.

Four decades later, the American banking system has changed dramatically, and mostly for the better. Greater local competition has made credit rationing unattractive. Interest-rate ceilings are long gone, meaning new banks can lure customers away from incumbents by offering more competitive terms. Finally, despite the regrettable persistence of residential segregation — now driven by socioeconomic rather than institutional factors — a diversity of providers, both banks and nonbanks, has emerged to cater to the needs of historically marginalized groups.

These auspicious trends are rendering the CRA increasingly obsolete. You would think that this would provide an impetus for reforming and even repealing this legislation. After all, there are other rules in place — such as the Equal Credit Opportunity Act (ECOA) and the Home Mortgage Disclosure Act (HMDA) — to prevent and punish individual instances of discrimination in credit provision.

To its credit, the OCC last year launched a review of CRA enforcement, and the FDIC and Fed have recently joined these efforts. But all available evidence suggests that any changes in regulatory supervision will be modest — possibly involving the replacement of the CRA's current system of vague qualitative assessments with a more predictable quantitative score. Any consideration of repeal, however, seems out of the question, despite accumulating evidence that the CRA has, in some cases, harmed banks' safety and soundness. In other cases, research shows that the CRA has caused credit to flow not to those most in need (low-income and minority borrowers) but to the best credits in CRA-eligible assessment areas (people like me, who have average or above-average incomes but live in gentrifying neighborhoods).

With $4.5 trillion worth of CRA lending commitments between 1992 and 2007, and with an explicit requirement that regulators consider CRA ratings when banks apply to merge or expand, the CRA has become a big business for activist groups. They lobby banks and regulators for promises of "community development" funding, and they protest vociferously when their requests go unheeded. Not surprisingly, these groups are the ones most bitterly opposed to any CRA reform.

Bank Secrecy Act

The BSA aims to combat illicit finance. Its anti-money-laundering/know-your-customer (AML/KYC) provisions have gradually grown in depth and scope, notably with the passage of the USA PATRIOT Act in the wake of 9/11. Being concerned with law enforcement and national security, the BSA is a set of financial regulations to which policymakers across the political spectrum are particularly sensitive.

However, as BSA-related rules have increased in number, they seem to be yielding diminishing returns, even as their costs mount. A 2018 survey of community banks by the St. Louis Fed found the BSA to be the most onerous financial regulation, accounting for nearly a quarter of bank compliance costs. A study by the Heritage Foundation estimated aggregate BSA-related compliance costs at somewhere between $4.8 billion and $8 billion.

Last year, financial institutions filed more than 5 million reports of suspicious financial activity — but only about a million of those seem to have been prompted by major security concerns, such as money-laundering, cybersecurity risks, and terrorism. Nearly two million have vague tags, such as "other suspicious financial activity." The relationship between BSA reporting and criminal prosecutions is also somewhat tenuous: the number of money-laundering investigations by the IRS and FBI has declined as BSA reports have escalated. In fact, candid off-the-record conversations with law enforcement veterans often reveal that BSA reports typically played a marginal role in their investigations.

Yet the BSA remains in place. Not only that, but as the nominal dollar thresholds for reporting transactions haven't changed since 1970, many more routine financial operations are caught in the BSA's net than legislators had originally intended.

The status quo has some obvious and powerful beneficiaries: law enforcement authorities, understandably, are eager to get their hands on as much transaction data as possible. After all, it might conceivably be useful in the future, and the cost of reporting isn't borne by law enforcement, but by the private sector. Politicians also benefit from supporting the Act, as it allows them to appear tough on crime without committing taxpayer dollars to those efforts.

The losers are banks and their customers, some of whom may have their transactions flagged even though they are guilty of no wrongdoing. The BSA also carries with it the major yet often ignored cost of financial privacy. Competition and financial innovation suffer as well, since the Act's heavy compliance costs make entry into banking less attractive, and banking innovation riskier: consider an innovative new product — say, a blockchain-based payment services provider — that might be used by wrongdoers. The costs of developing and marketing this product, and other technologies like it, would hardly seem worth the likely outcome of regulatory rejection.

Capital requirements

Capital buffers are an essential form of prudential risk management by banks. There is reason to believe that, even in a free market where regulatory capital requirements didn't exist, banks would still have a strong incentive to hold significant amounts of capital. Capital reserves are the collateral that creditors require to lend to the bank at reasonable rates.

But so long as we're far from that hypothetical free market, decisions on capital buffers are likely to come from regulators. That doesn't mean capital planning has to be a complicated exercise, involving two dozen different measures, calibrated by internal models, verified by public authorities, and subject to periodic tinkering by the Basel Committee and the Federal Reserve. In fact, post-crisis studies strongly indicate that simple leverage ratios without any risk weights were better predictors of impending failure before 2008 than the supposedly more scientific models prescribed by Basel. That research, by the way, comes not from free-market think tanks but from the venerable Bank of England, among others.
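The contrast between the two approaches can be made concrete with a small sketch. The balance-sheet figures and risk weights below are entirely hypothetical (loosely in the spirit of Basel-style risk buckets, not actual regulatory values); the point is only to show how risk weighting can make the same bank look better capitalized than a simple leverage ratio would suggest.

```python
# Hypothetical balance sheet, in $ millions. Each entry is
# (asset class, exposure, illustrative risk weight).
assets = [
    ("cash_and_treasuries", 200.0, 0.0),
    ("residential_mortgages", 500.0, 0.5),
    ("commercial_loans", 300.0, 1.0),
]
tier1_capital = 60.0

total_assets = sum(exposure for _, exposure, _ in assets)              # 1,000
risk_weighted_assets = sum(e * w for _, e, w in assets)                # 550

# Simple leverage ratio: capital over total (unweighted) assets.
leverage_ratio = tier1_capital / total_assets                          # 6.0%

# Risk-weighted ratio: the same capital, divided by a smaller base.
risk_weighted_ratio = tier1_capital / risk_weighted_assets             # ~10.9%
```

Because risk weights shrink the denominator, the risk-weighted ratio is the one a bank has an incentive to lobby over and to game, while the leverage ratio leaves little room for either.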

Small community banks tend to prefer simple leverage ratios. The unsuccessful Financial CHOICE Act, along with the less ambitious but successful Economic Growth, Regulatory Relief, and Consumer Protection Act (EGRRCPA), which passed last year, both included an off-ramp from complicated capital rules in the form of a single leverage ratio. The one passed by Congress in the EGRRCPA is limited to institutions with $10 billion or less in assets, and the level of the ratio is yet to be set by regulators. But it's a promising start.

However, larger banks are skeptical of a simple leverage rule. Part of their wariness is understandable: having spent considerable resources complying with the complicated set of existing requirements, they fear the disruption of moving to a new system of assessment. Furthermore, who can assure them that the single leverage measure won't gradually creep upward over time? Yet another part of their skepticism is also due to their understanding that complicated capital measures can be tinkered with, lobbied for and against, and ultimately gamed. For large and complex institutions with a great deal of political influence, such a playing field can be exceedingly attractive.

The Road to Financial Regulatory Reform

These examples — the CRA, the BSA, and the prudential capital regime — illustrate that attempts at meaningful change in financial regulation almost always meet with fierce resistance. But regulation needs to change, because the way we save, borrow, insure, and invest today is much different from the way we used financial services in the past.

The pressure to remove and change outdated regulation often comes from outside. Consider the rise of Uber, which has made a stronger case for the abolition of taxi medallions and regulated pricing than a thousand public policy economists could have mustered before its advent. Similarly, the effort to reconsider the enormous burden of banking mandates and restrictions has become more urgent with the rapid rise of nonbanks and fintech companies. These challengers are competing with banks while eschewing some of the more highly regulated aspects of the banking business. In addition, they're showing that innovation in areas such as credit-scoring, underwriting, and marketing can achieve the goals of regulation — consumer protection, adequate risk management, and effective competition — without the need for rigid regulatory mandates.

The growing number of partnerships between fintechs and banks is likely to further hasten the review of existing regulations for their fitness. I'd like to aim for a model of activity-based regulation — that is, one where similar regulations apply to institutions performing similar functions — and where policymakers recognize the efficiency of competitive markets in addressing customer needs and preferences. To paraphrase Adam Smith, the founder of the science of economics, "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest [TripAdvisor rating]."

The portrait of the community banker depicted in It's A Wonderful Life emphasized the close relationship between George Bailey and his customers. The regulator is a distant presence — and a nagging one at that, as demonstrated when the carelessness of George's uncle almost drives Bailey Bros.' Building and Loan into bankruptcy and an OCC inspector arrives. Yet policymakers have long since forgotten that the George Baileys of the world, if given appropriate incentives, can serve the public much more effectively than distant regulators can. Contrary to popular perception — and again paraphrasing Adam Smith — creative and competent individuals can often better promote the public interest from the private sector than from within government. Indeed, I'd wager that, if the generous and public-spirited George Bailey were around today, he'd be running a small innovative bank or a fintech company.

Thank you.

1 Cato colleagues have rightly pointed out that thrifts such as Bailey Bros. largely disappeared after the savings and loan crisis in the 1980s. For our purposes, however, it's the size of Bailey's institution — not its status as a thrift rather than a community bank — that matters.

2 I write "misleadingly," because the term "financial stability" suggests that any change in the current structure of banking — further consolidation, integration of different financial services activities, and periodic bank failures — are undesirable. That is emphatically not the case.

3 It is worth noting that much bank regulation historically also had a fiscal motive. For example, the National Banking Acts, by tying note issue to a bank's holdings of U.S. Treasury securities, helped to pay for the Civil War.

4 My CMFA colleague George Selgin, however, argues that the true motivation behind this specific solution — a U.S. federal central bank — was in fact to maintain the privileges of correspondent banks, mainly in New York City, while mitigating the impact of liquidity crunches on unit banks around the country.


April 24, 2019 5:19PM

Agencies Charged with Enforcing Immigration Laws Incarcerate Immigrants, Unsurprisingly

The Department of Homeland Security (DHS) and the Department of Justice (DOJ) recently released a report on immigrants incarcerated in the federal Bureau of Prisons (BOP) and as pretrial detainees by the U.S. Marshals Service (USMS).  The report offers some comments on state and local incarceration of non-citizens, but no systematic information.  BOP and USMS are both agencies within the DOJ, so it is simpler to look at the numbers for the DOJ altogether.

The DHS and DOJ are two agencies charged with enforcing immigration laws and incarcerating those who violate them, so it is unsurprising that a large percentage of those incarcerated in federal prisons are there for immigration offenses.  According to the report, about 19 percent of those incarcerated in the BOP or held by the USMS are known or suspected illegal immigrants and about 6 percent are legal non-citizens.  The remaining 75 percent are U.S. citizens, but some unknown percentage of them are likely immigrants too.  Non-citizens are about 7 percent of the entire U.S. population, so they are overrepresented in federal prison.
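The overrepresentation claim follows directly from the report's own shares. A quick back-of-the-envelope calculation, using only the percentages quoted above, makes the comparison explicit:

```python
# Shares from the DHS/DOJ report on federal custody (BOP plus USMS).
share_illegal = 0.19           # known or suspected illegal immigrants
share_legal_noncitizen = 0.06  # legal non-citizens

# Non-citizens together make up about a quarter of federal custody...
share_noncitizen_custody = share_illegal + share_legal_noncitizen  # 0.25

# ...versus roughly 7 percent of the overall U.S. population.
share_noncitizen_population = 0.07

# Ratio of custody share to population share: roughly 3.6x.
overrepresentation = share_noncitizen_custody / share_noncitizen_population
```

As the rest of the post argues, that ratio says more about which agencies run the federal system than about underlying criminality, since immigration offenses are prosecuted almost exclusively at the federal level.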

The report breaks down the primary offenses that non-citizens are incarcerated or held for in federal custody.  The most common primary offense was immigration at 38 percent, followed by drug offenses at 37 percent.  Other crimes comprise the remaining 25 percent.  The report does not show the number of primary offenses committed by illegal immigrants.  Through the 3rd quarter of 2018, about 33.7 percent of new offenders were sentenced for immigration offenses according to the U.S. Sentencing Commission.  It turns out that non-citizens are more likely to be sentenced for immigration offenses, which is not surprising.

More importantly, the federal prison population and those held by the USMS are not representative of incarcerated populations nationwide, so excluding them from the report means that it sheds little light on nationwide incarcerations by nativity, legal status, or type of crime.  Of the roughly 2.3 million people incarcerated in 2018, only about 8.3 percent were in federal prisons or held by USMS while the rest are in state and local facilities.    

Federal crimes are also vastly different from state crimes, so the criminals incarcerated in the federal system are very different from those on the state level.  Through the 3rd quarter of 2018, 50,929 people were sentenced to federal prison for federal crimes – 33.7 percent for immigration crimes.  Those immigration convictions comprised 100 percent of the convictions for immigration crimes in the United States in 2018 through the 3rd quarter.  By contrast, there were only 94 federal convictions for murder or manslaughter during the same time.  Although the data for murders in 2018 are not released yet, those federal murder convictions will likely account for less than 1 percent of all murders nationwide if past years are any guide.  For instance, if Mollie Tibbetts's accused killer is convicted, then he'll be in state prison and not counted in the federal homicide conviction statistics.

It’s important to understand the number of crimes caused by illegal immigrants, their criminal conviction rates, and their incarceration rates.  But doing so requires examining state-level data in addition to federal data so looking at only the latter produces a non-representative and inaccurate picture of the problem.  Based on the limited evidence that we have, illegal immigrants are less crime-prone than native-born Americans but more crime-prone than legal immigrants.   

April 24, 2019 11:45AM

President Trump Considering a Jones Act Waiver

New reports suggest that President Trump is considering granting a Jones Act waiver to allow non-U.S.-flagged ships to transport natural gas from energy-rich parts of the United States to the Northeast and Puerto Rico. He should do so without delay. Granting this waiver would not just mark a triumph of common sense, but also help fulfill President Trump's campaign promise to take on the Washington special interests who profit from laws such as the Jones Act at the expense of American consumers and businesses.

To learn more about this issue, the public and media alike are invited to attend an April 30 event at the Cato Institute that will examine the Jones Act's impact on Puerto Rico. Featuring Puerto Rico's Secretary of State, the president of the Puerto Rico Economic Development Bank, and other experts, the event will include a discussion of the island's attempt to obtain a Jones Act waiver for the purpose of transporting U.S. natural gas. For further information about both the Jones Act and the Cato Institute's effort to raise awareness about this burdensome and outdated law, please visit cato.org/jonesact.

April 24, 2019 10:17AM

Congress Can’t Delegate Power It Doesn’t Already Have

The Framers of the Constitution, fearful of establishing a tyrannical government, were cautious about placing too much power in the hands of one person or assembly. They thus split federal power between the legislative, executive, and judicial branches. This separation of powers also prevents the government from ridding itself of responsibility by granting too much power to other branches or entities.

When Congress gives significant power to another body—whether an executive agency or an interstate compact—it is delegating authority. But congressional authority is limited to the powers listed in Article I of the Constitution, and it certainly can’t grant power that it doesn’t have.

In 1986, Congress authorized the Metropolitan Washington Airports Authority (MWAA)—an interstate compact—to take over for the federal government in managing Reagan and Dulles Airports and the Dulles Toll Road, all of which are federal assets on federal land. A group of Virginia taxpayers and toll-road users, upset that the MWAA was raising tolls to pay for an extension of the Washington Metro, filed a class action against the MWAA, arguing in part that it was exercising improperly delegated authority. The U.S. Court of Appeals for the Fourth Circuit held that Congress did not delegate authority to the MWAA and so there was no separation of powers problem. Bizarrely, it stated that the power to operate airports on federal land is not “inherently federal.”

Yet, as the Supreme Court established in Department of Transportation v. Association of American Railroads (2015), when the government creates an entity, controls it, and the entity serves a federal interest, it is exercising federal power. Since the MWAA’s authority fits this description, Congress certainly granted it federal power. Congress goes against the separation of powers doctrine, however, when it gives power to other bodies just to avoid the time, energy, and hard decisions involved in legislating.

Further, the Fourth Circuit was wrong to draw a distinction between federal and "inherently federal" authority. As the Supreme Court has long understood, federal power is not divided into "inherent" and general varieties, but rather among the branches of government. The only types of federal authority are those enshrined in the Constitution: legislative, executive, and judicial. Thus, all federal power is "inherently federal."

Since Congress possesses only "inherently" federal power, the lower court created uncertainty about whether Congress has the authority to control airports. If it can control airports, the Fourth Circuit was mistaken and Congress delegated inherently federal authority to the MWAA. If Congress didn't have the authority in the first place, then where does the MWAA get its regulatory power? Either way, Congress can't delegate power it doesn't have.

Cato has now filed an amicus brief supporting the plaintiffs' request that the Supreme Court resolve the uncertainty regarding the definition of federal power. We argue that grants of federal power come from Congress — and that federal power is the only kind of power Congress has. The Constitution demands that each branch shoulder the responsibility entrusted to it, no matter how politically expedient it would be to let another entity bear the burden.

The Supreme Court will likely decide before it breaks for the summer whether to take up Kerpen v. Metropolitan Washington Airports Authority.

April 24, 2019 10:15AM

As Seattle Reels From An HIV Outbreak, Safe Consumption Sites Make More and More Sense

The US Centers for Disease Control and Prevention's latest Morbidity and Mortality Weekly Report (MMWR) alarmingly reports a 286 percent increase in cases of HIV among heterosexual persons injecting drugs in King County, Washington between 2017 and mid-November 2018. The report recalls a similar outbreak for similar reasons in rural Indiana that took place between 2011 and 2014, and ultimately led the state to enact legislation permitting needle-exchange programs to operate there.

As I explain in my policy analysis on harm reduction strategies, needle exchange programs have a more than 40-year track record of reducing the spread of HIV, hepatitis, and other blood-borne diseases, and are endorsed by the CDC and the Surgeon General, but are prohibited in many states by local anti-paraphernalia laws. But such laws are not the problem in the state of Washington. Needle exchange programs have operated legally there for years.

Safe consumption sites have been shown to be even more effective in reducing the spread of HIV and hepatitis, as well as preventing overdoses. The nearby city of Vancouver, BC, has seen them dramatically reduce cases of HIV as well as overdoses since 2003.

Recognizing this, the Seattle city council voted in 2017 to permit the establishment of two safe consumption sites. Those plans are obstructed by federal law, in particular the so-called "Crack House Statute" passed in the 1980s, which makes it a felony to "knowingly open, lease, rent, use, or maintain any place for the purpose of manufacturing, distributing, or using any controlled substance." A non-profit group in Philadelphia is attempting to set up a "Safe House" there, and has already been met with the threat of prosecution from the Department of Justice. Former Pennsylvania Governor Edward G. Rendell, a principal of that non-profit, spoke about this at a recent conference on harm reduction held at the Cato Institute.

With safe consumption sites working in more than 120 major cities throughout the developed world, including several in neighboring Canada, and with outbreaks of HIV developing across the US, lawmakers who claim to be deeply concerned about the plague of disease and overdoses afflicting the country should put their money where their mouths are and repeal the outdated "Crack House Statute" so cities and towns can get to work saving lives.

April 23, 2019 12:14PM

Supreme Court Will Decide Whether 1964 Law Bans LGBT Workplace Bias

For 40 years Congress has declined to pass the Employment Non-Discrimination Act, which in recent versions would prohibit private employment discrimination on the basis of sexual orientation and gender identity. (I've discussed its merits before, noting that "as libertarians recognize, every expansion of laws against private discrimination shrinks the freedom of association of the governed.") Now, as predicted, the Supreme Court has agreed to resolve a split in the circuit courts over the theory that Title VII of the 1964 Civil Rights Act banned these forms of discrimination all along, and that courts simply didn't figure that out until recently.

The strongest case for this reading rests on an ambitious, yet not frivolous, plain meaning approach to Title VII’s text. The law bans any discrimination against an employee “because of… sex.” Now suppose that the employer would never fire Ginger for taking a romantic interest in men, but does fire George when it learns that he does so. It has (the argument goes) treated him differently because of his sex. Similar arguments can reach the case of an employee’s gender identity.

Ranged against this line of argument is precedent as well as, should one choose to give it weight, likely legislative intent. In the years after 1964, courts considered but rejected arguments that the law by its terms covered sexual orientation (coverage that would presumably have been inadvertent, since almost no one thinks the lawmakers of that era intended such a result). Much later, when it endorsed the new interpretation, the federal Equal Employment Opportunity Commission called the old precedents "dated." "Dated" might seem like a pejorative term for "well-established," yet it is true that the Supreme Court in Price Waterhouse v. Hopkins (1989) did mix things up somewhat by accepting a theory that Title VII covered not just sex but gender "stereotyping." That might open the door to further evolution in what had not shown itself to be an entirely fixed standard.

The proposed new and broader reading of Title VII has met with mixed success in the circuit courts, creating the split that the high court yesterday agreed to resolve. (The three cases are Altitude Express v. Zarda; Bostock v. Clayton County, Georgia; and R.G. & G.R. Harris Funeral Homes v. EEOC.) When the Seventh Circuit, by an 8-3 en banc vote, accepted the broader reading in Hively v. Ivy Tech, its multiple opinions included a memorable contrast between those of Judge Richard Posner, concurring with the majority view, and Judge Diane Sykes for the dissenters. Ken White at Popehat tells the tale:

With rather remarkable frankness, Posner rejects the majority’s attempt to premise the decision on Supreme Court precedent and forthrightly accepts a mantle of what might be called "judicial activism":

I would prefer to see us acknowledge openly that today we, who are judges rather than members of Congress, are imposing on a half-century-old statute a meaning of 'sex discrimination' that the Congress that enacted it would not have accepted. This is something courts do fairly frequently to avoid statutory obsolescence and concomitantly to avoid placing the entire burden of updating old statutes on the legislative branch. We should not leave the impression that we are merely the obedient servants of the 88th Congress (1963–1965), carrying out their wishes. We are not. We are taking advantage of what the last half century has taught.

That’s an extraordinarily blunt statement of the judicial philosophy that conservatives attack as “legislating from the bench.”

It falls to Judge Sykes in dissent to articulate the case for judicial conservatism and a limited role for courts:

This brings me back to where I started. The court’s new liability rule is entirely judge-made; it does not derive from the text of Title VII in any meaningful sense. The court has arrogated to itself the power to create a new protected category under Title VII. Common-law liability rules may judicially evolve in this way, but statutory law is fundamentally different. Our constitutional structure requires us to respect the difference.

It’s understandable that the court is impatient to protect lesbians and gay men from workplace discrimination without waiting for Congress to act. Legislative change is arduous and can be slow to come. But we’re not authorized to amend Title VII by interpretation. The ordinary, reasonable, and fair meaning of sex discrimination as that term is used in Title VII does not include discrimination based on sexual orientation, a wholly different kind of discrimination. Because Title VII does not by its terms prohibit sexual orientation discrimination, Hively’s case was properly dismissed. I respectfully dissent.

These philosophical divides on statutory interpretation — which of course play out every term in lower-profile cases — are likely to be on the Court’s mind next fall.

April 22, 2019 3:54PM

The Fight over Particulate Matter

The EPA and conventional air pollution regulations are back in the news. NPR reported that the seven-member Clean Air Scientific Advisory Committee (CASAC), which provides the EPA with technical advice for National Ambient Air Quality Standards, is "considering guidelines that upend basic air pollution science." But NPR's depiction of a settled scientific debate is oversimplified: it ignores real misgivings about the science used to justify the regulations. The episode also provides an opportunity to ask questions about the proper role of science in public policy.

The pollutant in question is particulate matter (PM), tiny particles or droplets emitted from power plants, factories, and cars. The EPA contends that PM with diameters smaller than 2.5 micrometers, about 3 percent of the size of a human hair, is the most harmful because the particles can be inhaled deep into the lungs. For PM, as for the five other criteria pollutants, the Clean Air Act requires that the EPA periodically prepare an analysis that "accurately reflects the latest scientific knowledge" on the health effects of exposure. It must then set air quality standards "requisite to protect the public health…allowing an adequate margin of safety."

Whether one favors caution and stringent pollutant standards or is skeptical of the efficacy of air quality rules and worried about their costs, PM is important. On the one hand, the claimed harms of PM are large: one (contested) study concluded that 2005 levels of PM caused about 130,000 premature deaths per year, which would make PM the sixth leading cause of death in the United States, just after strokes. On the other hand, the regulations are expensive. Between 2003 and 2013, EPA regulations accounted for 63–82 percent of the estimated monetized benefits and 46–56 percent of the costs of all federal regulations. The benefits of reducing PM specifically make up 90 percent of the monetized benefits of EPA air regulations, meaning PM rules play an outsized role in the justification for many of the costliest federal regulations.

No matter which side of the debate one is on, it would seem important that the EPA have a rational standard-setting process that properly weighs both the possible reduction in the harms of PM and the potential costs. Unfortunately, that is not the case.

The scientific evidence of the harms of PM is much more uncertain than many observers claim, and the conflict over what we do and do not know about the effects of PM has existed for decades. The evidence of negative health effects comes primarily from two studies published in the 1990s, the Harvard Six Cities Study (SCS) and the American Cancer Society Study (ACS). As I have previously noted,

The SCS has been the subject of intense scientific scrutiny and much criticism because of results that are biologically puzzling. The increased mortality was found in men but not women, in those with less than high school education but not more, and those who were moderately active but not sedentary or very active. Among those who migrated away from the six cities, the PM effect disappeared. Cities that lost population in the 1980s were rust belt cities that had higher PM levels and those who migrated away were younger and better educated. Thus, had the migrants stayed in place it is possible that the observed PM effect would have been attenuated.

Furthermore, a survey of 12 experts (including 3 authors of the ACS and SCS) asked whether concentration-response functions between PM and mortality were causal. Four of the 12 experts attached nontrivial probabilities, ranging from 65 percent down to 10 percent, to the relationship between PM concentration and mortality not being causal. Three experts said there is a 5 percent probability of noncausality. Five said a 0–2 percent probability of noncausality. Thus 7 of the 12 experts would not reject the hypothesis that there is no causality between PM levels and mortality.
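The arithmetic behind that tally can be made explicit in a short script. The grouping follows the survey summary above; the individual values assigned to the four "nontrivial" experts are illustrative placements within the reported 10–65 percent range, not figures from the survey itself:

```python
# Stated probability (percent) that the PM-mortality link is NOT causal,
# one entry per expert, grouped as in the survey summary.
nontrivial = [65, 40, 25, 10]   # assumed spread; only the 10-65% range is reported
five_percent = [5, 5, 5]        # three experts at exactly 5%
near_zero = [2, 1, 0, 0, 0]     # five experts at 0-2%

experts = nontrivial + five_percent + near_zero
assert len(experts) == 12

# An expert attaching at least a 5 percent probability to noncausality
# would not reject the no-causality hypothesis at conventional levels.
non_rejecters = [p for p in experts if p >= 5]
print(len(non_rejecters))  # 7 of the 12 experts
```

Whatever exact values the four "nontrivial" experts gave within that range, the count of non-rejecters is the same: 4 + 3 = 7 of 12.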

The latest installment of the debate between supporters and critics of the SCS and ACS studies is the appointment of Dr. Tony Cox as head of the CASAC and the dissolution of a PM advisory subcommittee. Cox has long criticized the science underlying PM standards and argued that epidemiological studies of pollutants have made causal assertions about PM exposure and health outcomes for which the evidence is weak.

To many epidemiologists the appointment of Dr. Cox is akin to putting a creationist in charge of an advisory panel on evolution. In a recent New York Times op-ed, Dr. John Balmes, a former member of the dissolved PM advisory subcommittee, argued,

There has been little dispute that microscopic particulate matter in air pollution penetrates into the deepest parts of the lungs and contributes to the early deaths each year of thousands of people in the United States with heart and lung disease….[Dr. Cox] has been pushing a narrow statistical approach that would exclude most epidemiological studies from consideration by the EPA in reviews of clean air standards.

But even if we accept Dr. Balmes' view that the science is settled, the EPA’s standard-setting process is deeply flawed. The requirements of the Clean Air Act combined with the attributes of PM ensure that the standards set by the EPA are arbitrary for two reasons.

First, when determining the appropriate level of air quality standards, the EPA cannot consider regulatory costs. In 2001, the Supreme Court ruled that the Clean Air Act "unambiguously bars cost considerations from the [ambient air quality standards]-setting process." Thus, EPA decisions on pollutant standards can only be about benefits.

Second, the characteristics of PM make it very difficult to identify a suitable standard. PM is a non-threshold pollutant, meaning there is no easily identifiable concentration above which PM causes harm and below which it causes none. Any concentration of PM other than zero presumably causes some harm, so the exposure standard must be based on other factors.

One logical factor to use to set that standard would be the costs of the regulation, which the EPA is not allowed to consider. Thus, for a non-threshold pollutant in the context of a policy regime in which only the benefits of exposure reduction count, the allowable amount of pollution should be zero. But for political and pragmatic reasons, the EPA cannot set the standards that low; the United States would have to deindustrialize. Instead, the EPA sets the levels at what are essentially arbitrary points.

The setting of standards for ozone, another non-threshold criteria pollutant, across the Bush and Obama administrations illustrates how illogical this process is. Under Bush in 2007, the EPA proposed setting the standard for ozone between 0.070 and 0.075 parts per million (ppm). The scientific justification was the EPA's interpretation of two clinical studies by Dr. William Adams, which found a reduction in lung function in subjects exposed to 0.080 ppm of ozone. The final regulation had not been issued by the time Obama was inaugurated, and in 2010 the new administration proposed lowering the standard to between 0.060 and 0.070 ppm. The justification was still Dr. Adams's two studies, but the Obama administration reinterpreted them and determined that the originally proposed standards were not low enough.

Further confounding the process, Dr. Adams disagreed with both administrations’ interpretations of his findings and argued that his studies did not show any statistically significant relationship between ozone levels below 0.080 ppm and decreased lung function. Two different administrations determined two different standards based on the same studies, and the studies’ author didn’t think his findings justified either standard.

The subjectivity of a process ostensibly based on science raises a question: what is the role of science in public policy? Many seem to believe that "sound science" can and should dictate policy outcomes. Science can inform people's preferences about policies, but science alone cannot dictate which policy outcome to choose. The weighting of costs, benefits, and other normative considerations, such as individual rights and the appropriate use of governmental coercion, requires judgments that are not scientific. Science is a necessary but not sufficient condition for adjudicating public policy questions.

The Clean Air Act, with its ban on considering costs when setting air quality standards, implicitly gives "rights" to those who want maximum reduction in pollution exposure. Those who would prefer less exposure reduction (either because they believe the evidence for negative health effects from emission exposure is weak or because they believe emission reduction is too expensive) have no recourse but to contest the "science" used to rationalize current exposure standards.

Instead of a never-ending scrum over science, another way to resolve environmental quality disputes is to recognize strict environmental rights but allow them to be relaxed in return for compensation. The national SO2 and California NOx emission trading markets are steps in the right direction, but I would go one step further and allow the "cap" in those cap-and-trade emissions markets to be changed. Those who would like to increase allowable emissions exposure should be able to pay local air quality regions for that change, and the proceeds should be rebated to all residents on a health-risk adjusted basis.
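As a purely illustrative sketch of that proposal, the following models the two moving parts: a polluter pays a price per ton to raise the cap, and the proceeds are split among residents in proportion to health-risk weights. All names, weights, and dollar figures here are hypothetical:

```python
def raise_cap(current_cap_tons, extra_tons, price_per_ton, residents):
    """Raise an emissions cap in exchange for payment, rebating the proceeds
    to residents in proportion to their health-risk weights.

    residents: list of (name, risk_weight) pairs; a higher weight means
    greater exposure risk and therefore a larger share of the rebate.
    Returns the new cap and a dict mapping resident name to rebate.
    """
    proceeds = extra_tons * price_per_ton
    total_weight = sum(weight for _, weight in residents)
    rebates = {name: proceeds * weight / total_weight
               for name, weight in residents}
    return current_cap_tons + extra_tons, rebates

# Hypothetical example: a polluter pays $50/ton to emit 100 extra tons.
# A downwind household bears three times the exposure risk of an upwind one.
new_cap, rebates = raise_cap(
    current_cap_tons=10_000,
    extra_tons=100,
    price_per_ton=50,
    residents=[("downwind household", 3.0), ("upwind household", 1.0)],
)
print(new_cap)                        # 10100
print(rebates["downwind household"])  # 3750.0 of the $5,000 in proceeds
```

The point of the risk-adjusted split is that those who bear the most exposure receive the most compensation, which is what would let the trade between polluters and the most risk-averse residents clear.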

This type of exchange would allow trades between polluters and the most risk-averse. Without such a policy change we will be stuck with an endless political fight over whose science is more “sound.”

Written with research assistance from David Kemp.