
April 9, 2019 9:22AM

More on Commodity Price Targeting

In a previous post, I argued that Paul Volcker didn't put a stop to inflation by having the Fed systematically increase interest rates when commodity prices rose, and lower them when commodity prices fell. While commodity-price targeting, aka a "price rule" for monetary policy, had some prominent proponents back in the 1980s, neither Volcker nor any other Fed chair embraced the idea.

Today's post has to do with two things that I didn't say in that earlier one. I didn't say that commodity price movements played no part at all in the Fed's decision-making. And I didn't say whether they should or shouldn't play any part in it. I plan here to review studies of the actual role commodity price movements have played in the Fed's monetary policy decision-making, as well as ones that ask whether the Fed would do a better job if it let them play a bigger one.

This post is, I warn you, both long and very wonkish. But if the Fed is to consider once again the possibility of basing its monetary policy decisions on the behavior of commodity prices, it's useful to take account of what a substantial body of previous research has had to say on the topic.

Targets and Indicators

In that research, the distinction between statistics that may serve as indicators of economic conditions to which the Fed might wish to respond and ones it should attempt to target, that is, to keep within specific bounds or on some specific path, is of crucial importance. The claim that Volcker's Fed relied on a commodity "price rule" implies, not just that it referred to commodity price movements as one of several indicators of monetary conditions, but that it actually endeavored to target commodity prices.

Although Volcker never targeted commodity prices, it doesn't follow that either he or later Fed chairs made no use of commodity prices as policy indicators. On the contrary: the Fed has often referred to commodity price movements in making monetary policy decisions. What's more, at least one Fed chair appears to have assigned considerable weight to commodity price changes, and to changes in the price of gold especially. That chair was, not Paul Volcker, but Alan Greenspan. I know this because I co-authored a study on the topic with my then-UGA colleague, Bill Lastrapes.[1]

Bill and I affirmed something that Greenspan himself had claimed[2], namely, that during his tenure the Fed treated sensitive commodity prices, and the price of gold in particular, as important indicators of the state of the economy. It was more likely to raise the fed funds rate when either general commodity prices or the price of gold rose, and more likely to reduce the rate when either commodity prices generally or the price of gold fell. Bill and I were, however, careful to note that this "does not necessarily mean that the Fed 'targets' the price of gold, or focuses solely on sensitive commodity prices to guide policy."

Are Commodity Prices Useful Indicators? Should the Fed Target Them?

Did Greenspan's policy make sense? And if it did, might the Fed have done still better, either then or since, by actually targeting a commodity price index, instead of just using it as one of several economic indicators?

Many economists tried to answer these questions during Greenspan's term. And while most concluded that commodity prices were indeed useful economic indicators, they also tended to confirm the conclusion, reached by FRB Kansas City economist C. Alan Garner in the earliest of the studies, "that monetary reforms requiring a close link between commodity prices and money growth are inadvisable."

Garner's 1985 article, like several subsequent ones, was written in response to price-rule proposals of Jude Wanniski and Robert Genetski, among others. Those proposals, Garner notes, called "for using either the price of gold or an index of sensitive commodity prices" not merely as economic indicators but "as an intermediate target of monetary policy." (Wanniski favored a gold price target, whereas Genetski argued for gearing Fed policy adjustments to movements in a broader commodity index.)

The appeal of a commodity price rule is that commodity prices, being set in auction markets, adjust more rapidly than others, and might therefore serve to reveal imminent inflation or deflation before either becomes evident in broader price indexes, like the CPI. However, Garner says, a commodity price rule also suffers from two fundamental drawbacks. First, because "large fluctuations in the relative prices of commodities are not uncommon…a policy response based on changes in commodity prices could have undesirable effects on aggregate output and prices." In practice Fed officials would have to ignore certain commodity-index movements, rather than respond to them, to avoid contributing to the business cycle instead of dampening it. The narrower the commodity set, the greater this problem becomes. For that reason a gold price target would generally be even less reliable than a broader commodity price target.

Second, the loose and uncertain relation between commodity price movements and the Fed's actions also limits a commodity index's usefulness as an intermediate policy target. "The channels through which monetary policy affects commodity prices are complex and circuitous," Garner says. Consequently, "it would be difficult for policymakers to produce the desired movements in an index of sensitive commodity prices."

These drawbacks of a price rule didn't necessarily mean that commodity prices weren't useful as policy indicators or, in Garner's phrase, "information variables." However, Garner also found that they "provide only limited information about the future course of the economy," making them insufficiently reliable "to justify a central place in monetary policy." Commodity price movements were a poor leading indicator of changes in real GNP, and an even poorer guide to impending changes in CPI inflation. "It seems best," Garner concluded, "to employ commodity prices as one of several information variables" the Fed used to guide its monetary policy decisions. Garner recommended, in other words, that the Fed stick to using commodity prices just as it had been using them, at least according to my and Bill Lastrapes' research. Significantly, then-Federal Reserve Vice Chairman Manuel Johnson also believed that "commodity prices are probably more valuable as an indicator of monetary policy than as a target."

In 1989 Garner published a more exacting econometric study of the merits of a "commodity price rule," this time specifically addressing the claim that "commodity prices are so closely related to the general price level that achieving the commodity price target range would also control the general inflation rate." That study reached the same conclusions as his earlier one.[3]

In the meantime, several other studies either reached conclusions very similar to Garner's (e.g. DeFina 1988; Webb 1988; Whitt 1988; and Furlong 1989), or found that commodity prices were not even useful policy indicators (e.g. McCallum 1990).

I've located only one study from the period in question suggesting that the Fed might have improved its performance by linking its policy decisions more closely to commodity price movements. According to a 1989 paper by Brian J. Cody and Leonard O. Mills, and contrary to my and Lastrapes' findings, the Fed had "not responded to commodity price innovations in the past"; so long as it attached some weight to stabilizing inflation, it might therefore have improved its performance by using them. But, Cody and Mills add (in a passage that now seems as relevant as ever), even their favorable verdict does not mean that the Fed would have been wise to adopt a commodity price rule:

One issue that we have not examined in this paper is the relative merits of a commodity price rule in comparison to other recently advocated rules based on nominal quantities such as the monetary base…or nominal GNP… . It may be that the Fed could improve its policy by simply having greater feedback on any nominal variable in setting nominal interest rates. Advocates of a commodity price rule would argue that such feedback would occur in a more timely manner using commodity prices because there are no data lags or revisions. Whether this advantage, and possibly others, is large enough to favor commodity prices over other nominal variables is a subject for future research.

Getting Colder

So much for studies from the 1980s. A sampling of later studies suggests that the case for leaning heavily on commodity prices has gone from weak to weaker over time, thanks in part to the diminishing role of commodities as a share of U.S. final output.

Thus a pair of Chicago Fed Letters published in November 1993 and December 1994 found that "inflation forecasts based on individual commodity prices and commodity price indexes can be highly misleading" and "that commodity price indexes are not statistically useful in predicting consumer price inflation." According to S. Brock Blomberg and Ethan S. Harris, the authors of a 1995 New York Fed study, "none of the channels through which commodity prices signal more generalized inflation" were then "operating as well as they did in the past." More specifically, they found that the relation between changes in commodity price indexes and subsequent movements in the CPI inflation rate had "weakened considerably starting in the mid-1980s" to the point of rendering the indexes useless, if not worse than useless, even as policy indicators.

Still later studies reach similar conclusions. According to a 2010 paper by Florian Verheyen,

While there was a strong link between commodity prices and CPI inflation in the 1970s and the beginning of the 1980s, the relationship has weakened, respectively diminished over time. Today we are unable to detect a reaction of consumer prices to commodity price shocks. Thus, commodity prices might not serve as good indicator variables for monetary policy.

Verheyen adds that his findings "are pretty in line with" those of most other studies published since the mid-1990s. However, at least two studies of the period, a 2003 study by Titus Awokuse and Jian Yang, and a 2007 working paper by Frank Brown and David Cronin, found using different methods that commodity prices still possessed some information value.

Changing Goals

Of course no study of the usefulness of commodity prices as monetary policy guides can ever be regarded as the last word on the subject. That is so because the relationship between commodity price movements and changes in other macroeconomic variables continues to change. But it's so for another reason as well, namely, the fact that our understanding of the ultimate goal of monetary policy has also been changing.

Until recently, economists generally took for granted that monetary policy should treat a stable overall inflation rate either as its sole goal or (as in the Fed's case) as one of a small set of goals. Hence the emphasis that previous studies of the usefulness of commodity price indexes placed on the relation between movements in those indexes and changes in the CPI inflation rate. But increasing numbers of economists are now either endorsing or giving serious attention to the view that the aim of monetary policy should be, not a stable inflation rate, but a stable growth rate of some measure of aggregate nominal spending, such as nominal GDP or nominal Final Sales to Domestic Purchasers.

One of the more obvious reasons for this change of opinion has to do with the implications of productivity innovations, and adverse ones especially. The question is simple: if, say, a war or harvest failure or both (to offer a stark example) were to halve an economy's output, would it make sense for that economy's central bank to tighten money to keep prices from rising? Since goods are in fact less abundant, keeping their prices from rising means reducing the public's earnings so it has less to spend on them. But that in turn means reducing firms' ability to recoup their nominal outlays, and also making it harder for people to pay their debts. Ultimately, although prices don't rise, real output, consumption, and employment decline more than they would were aggregate spending kept constant. And it's the levels of real variables like output and employment, rather than the price level, that determine people's well-being.
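
The arithmetic behind that intuition is worth making explicit. Treating nominal spending as the product of the price level and real output (NGDP = P × Y), and using hypothetical round numbers of my own choosing, a minimal sketch:

```python
# Stylized arithmetic, not a model: NGDP = P * Y, with made-up numbers.

def required_ngdp(p, y):
    """Nominal spending implied by a price level and real output."""
    return p * y

def required_price_level(ngdp, y):
    """Price level implied by a nominal-spending total and real output."""
    return ngdp / y

# Before the shock: real output Y = 100, price level P = 1.0, so NGDP = 100.
# An adverse supply shock (war, harvest failure) halves output to Y = 50.
y_after = 50

# Zero-inflation policy: holding P at 1.0 forces nominal spending -- and
# with it the public's nominal earnings -- to halve as well.
ngdp_under_price_target = required_ngdp(1.0, y_after)

# NGDP targeting: holding spending at 100 lets the price level rise to 2.0,
# leaving nominal incomes (and debtors' ability to pay) intact.
p_under_ngdp_target = required_price_level(100, y_after)

print(ngdp_under_price_target, p_under_ngdp_target)  # 50.0 2.0
```

The point is purely definitional: once output has halved, a stable price level and stable nominal incomes cannot both obtain, and the central bank must choose which to sacrifice.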

Similar arguments hold for less severe adverse supply shocks, and also (though the fact is often overlooked) for positive ones: a decline in the inflation rate below some long-run target is no cause for concern so long as it reflects a proportionate gain in productivity. Why shouldn't we have less inflation, if not occasional deflation, when goods' unit production costs are falling especially rapidly, and more when those costs are falling less rapidly than usual, or rising?

Nor is there any lack of rigorous argumentation supporting these intuitive arguments favoring stable spending over stable inflation as the key to overall macroeconomic stability. I review a number of works, both early and recent, here. And new ones, like this one in the most recent JMCB number by David Beckworth and Josh Hendrickson, and this one by Jim Bullard and Aarti Singh (also forthcoming in the JMCB), are coming out regularly.

The case for maintaining a constant rate of inflation in the face of productivity gains and setbacks is, in comparison, neither intuitively compelling nor grounded in sound microeconomic analysis. Most models that appear to support it either (1) abstract from productivity innovations altogether; (2) unrealistically treat factor prices (including wages) as less "sticky" than output prices (while assuming that output prices are equally unresponsive to both aggregate demand and unit cost innovations); or (3) simply treat the absolute value of the rate of inflation, or the difference between actual and target inflation, as an argument in either agents' utility functions or the central bank's loss function. One only has to point out these often overlooked assumptions to appreciate the shortcomings of the models in question.

The Summer of '08

So let's suppose, for the sake of argument, that instead of aiming for steady inflation the Fed aims for steady NGDP growth. Might a commodity price rule help it to do that? As Cody and Mills suggested some years ago, the matter warrants further study. But I wouldn't count on a positive finding, for commodity prices are especially likely to be influenced by supply-side innovations (such as gold discoveries) or blights. Because of that, a central bank that follows a commodity price rule would almost certainly render NGDP and other measures of nominal spending even less stable than they would be under conventional inflation targeting. For that reason, and until someone comes up with some stats suggesting otherwise, those of us who prefer NGDP targeting (or something like it) over inflation targeting have more reason than most to hope that the Fed will resist pressure to assign additional weight to commodity price movements in making monetary policy decisions.

For evidence of this sort of peril, we need look no further than the events of 2008. Throughout that period, as I've reported elsewhere, Fed officials — and the FOMC's inflation hawks especially — were worried not about a possible recession but about inflation. By mid-2008 headline CPI inflation, which had been rising for a year, was nearing 6 percent. For that reason, and despite rapidly deteriorating financial markets, the Fed resisted cutting its fed funds target.

[Chart: Commodities2.png — NGDP growth and the IMF global commodity price index]



Of course we now know that the U.S. economy had in fact been in a recession since December 2007, and that both the headline CPI inflation rate and other measures of inflation were misleading indicators of the true state of affairs. That state was eventually made clear by NGDP data showing that the NGDP growth rate had been declining for months, and (as the above chart shows) was turning negative just as headline inflation reached its peak.

Would it have helped matters had the FOMC paid more attention to commodity price movements? Hardly. In fact such movements (shown in the chart using the IMF's global commodity price index), and an increase in the price of oil in particular, were the chief force behind the high CPI inflation rate. It was, in other words, precisely because they were placing too much emphasis on commodity-price increases, instead of downplaying them, that Fed officials made the fatal mistake of failing to ease monetary policy as the U.S. economy floundered. Had Fed officials instead focused on a "core" inflation measure, like the core PCE inflation rate, which leaves out the influence of both food and energy prices, they might have avoided that mistake. And had they been able to ignore inflation altogether, either by tracking more accurate and frequent NGDP statistics, or by forecasting NGDP, they might have done better still.

In fairness to proponents of commodity-price targeting, the 2008 episode is a particularly unfortunate example of how commodity-price targeting can go wrong. On other occasions, as the chart below shows, placing more weight on commodity prices might have helped to tame the business cycle, as in the post-2003 run-up of commodity prices, which foretold a subsequently revealed rise in the annual NGDP growth rate to a peak just shy of 7 percent. Commodity prices fell, on the other hand, during the peak years of the dot-com bubble, when annual NGDP growth also topped 6 percent, only to start rising as the bubble burst, dragging NGDP growth down with it.[4]

[Chart: Commoditites-longrun.png — commodity prices and NGDP growth over the longer run]



The point, once again, is that while commodity prices may provide monetary authorities with useful information, to be helpful they must be used carefully and in conjunction with other indicators. Although a central bank's commitment to a strict commodity price rule might sometimes contribute to overall macroeconomic stability, at other times it could do just the opposite.

And No, I'm Not Against Monetary Rules

Although I've pointed to various disadvantages of a commodity price rule, I hasten to add that I'm far from being opposed to any sort of monetary rule. On the contrary: I'd much prefer a Fed that stuck to a predictable, systematic policy, to our present Fed with its constant policy flipping and flopping. I'd prefer it because I'm pretty sure it could simplify the challenge of economic calculation and forecasting, and thereby ultimately make us all better off. But favoring a monetary rule is one thing; favoring any old rule is another. And bad as our flip-flop Fed may be, it is not yet so bad that the wrong monetary rule couldn't possibly make it worse. 

_______________

[1] Although we never published the study in question, the editors of The Wall Street Journal somehow got wind of it. See "Review & Outlook: Free Ride for the Fed," Dec 19, 1995.

[2] See "Greenspan Takes the Gold," The Wall Street Journal, Feb. 28, 1994.

[3] A subsequent study questioned Garner's statistical procedures, but reached similar conclusions using what its author regarded as a more appropriate procedure.

[4] The same patterns hold, by the way, if one uses the Thomson Reuters core CRB index.

[Cross-posted from Alt-M.org]

April 8, 2019 1:13PM

Federal Aid Failure #12

In my upcoming Cato study, “Restoring Responsible Government by Cutting Federal Aid to the States,” I discuss 18 reasons why federal aid to state and local governments should be zeroed out.

Reason #12 concerns how aid induces states to delay important projects such as infrastructure. Governments stall needed projects for years as they wait for federal grant approval, and then after approval federal regulations raise project costs and create further delays.

Federal disaster aid illustrates the problem. Texas is currently stalled in pursuing $4 billion of hurricane protections as it waits for federal money, according to the Wall Street Journal. But Texas has a $1.7 trillion economy. There is no reason why it cannot find $4 billion to fund its own hurricane defenses. State-funded spending would be more timely and less bureaucratic than spending funded by Washington.
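
A back-of-the-envelope check of that claim, using only the two figures just cited (the calculation, not the figures, is mine):

```python
# Rough scale comparison using the figures cited above.
hurricane_protection = 4e9    # $4 billion in stalled protection projects
texas_economy = 1.7e12        # Texas's roughly $1.7 trillion annual economy

# The stalled sum comes to about a quarter of one percent of annual
# state output -- small enough for Texas to self-finance.
share = hurricane_protection / texas_economy
print(f"{share:.2%}")  # 0.24%
```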

Unfortunately, when it comes to subsidies for their own states, nominal conservatives such as Governor Greg Abbott and Senator John Cornyn put aside their conservative views on federalism and become federal spending proponents.

The WSJ summarizes the current battle:

Texas lawmakers are up in arms over more than $4 billion in federal aid for hurricane protection that has been delayed amid tensions between the Trump administration and Puerto Rico.

In the aftermath of the calamitous 2017 hurricane season, the federal government last April allocated a total of $16 billion to Texas, Puerto Rico, and other areas to invest in defenses against future storms. A year later, not a single dollar has been disbursed …

Political leaders in Texas, which will receive about a quarter of the total allocation, say they are being unfairly ensnared in a bureaucratic tussle. And they are increasingly worried that the delay is leaving Gulf Coast communities still recovering from Hurricane Harvey vulnerable to more destruction just as another hurricane season is set to begin June 1.

Texas Republican Sen. John Cornyn, a Trump ally on most issues, in a speech on the Senate floor last week blamed the White House budget office for the delay.

“The disregard of those who are struggling to rebuild and prepare for future storms by the bureaucrats is appalling,” Mr. Cornyn said.

His comments came after Texas Gov. Greg Abbott and state land commissioner George P. Bush, both Republicans, and members of the state’s congressional delegation have implored the Trump administration to move quicker to release the funding.

. . . Before any of the funds can be disbursed, the White House budget office must sign off on rules drafted by HUD detailing how the money can be spent. The federal housing agency said the process has been slow because it is the first time billions of aid dollars have gone toward such a prevention effort.

A HUD spokesman said the department expects to complete the rules in the next few weeks.

But even after those rules are complete, states must then devise a plan to disburse the money, which also needs federal approval. That process could potentially take an additional nine months, said Brittany Eck, a spokeswoman for Mr. Bush.

. . . The delays could slow the pace of storm-protection projects such as deepening and widening bayous, according to Alan Black, director of operations for the Harris County Flood Control District, which includes Houston.

More on federal disaster spending here.

[Chart: texas.png]

April 8, 2019 8:30AM

The Pentagon’s Accounting Problem

The Pentagon’s inability to pass an audit, after years of outright stonewalling, followed by many more years of foot-dragging, is suddenly a hot topic. A few weeks ago, Rolling Stone featured a scathing exposé highlighting the Pentagon’s inability to count.

Writer Matt Taibbi explains

Ahead of misappropriation, fraud, theft, overruns, contracting corruption and other abuses that are almost certainly still going on, the Pentagon’s first problem is its books. It’s the world’s largest producer of wrong numbers, an ingenious bureaucratic defense system that hides all the other rats’ nests underneath.

These and other stories seem to have prompted House Armed Services Committee Chairman Adam Smith (D-WA) to deliver a stern message to officials in the Department of Defense. The Trump administration has requested $750 billion for the Pentagon, but, Smith noted, “We literally don’t know where a chunk of that $750 billion is going to go. We can identify some of it here and there, but by any normal accounting measure, you can’t tell us where you’re spending your money, or how much inventory you have.”

Taibbi continues:

If and when the defense review is ever completed, we’re likely to find a pile of Enrons, with the military’s losses and liabilities hidden in Enron-like special-purpose vehicles, assets systematically overvalued, monies Congress approved for X feloniously diverted to Program Y, contractors paid twice, parts bought twice, repairs done unnecessarily and at great expense, and so on.

Enron at its core was an accounting maze that systematically hid losses and overstated gains in order to keep investor money flowing in. The Pentagon is an exponentially larger financial bureaucracy whose mark is the taxpayer. Of course, less overtly a criminal scheme, the military still churns out Enron-size losses regularly, and this is only possible because its accounting is a long-tolerated fraud.

Judging from the response of DoD’s senior leaders, however, there is absolutely no cause for alarm. Acting Secretary of Defense Pat Shanahan explained last year that they “never expected to pass the audit.” When a reporter asked him why taxpayers should trust the Pentagon with their money if it can’t even “get their house in order and count ships right or buildings right,” Shanahan quipped, “We count ships right.”

Taibbi explains: “This was an inside joke. The joke was, the Pentagon isn’t so hot at counting buildings. Just a few years ago, in fact, it admitted to losing track of ‘478 structures,’ in addition to 39 Black Hawk helicopters (whose fully loaded versions list for about $21 million a pop).”

This cavalier attitude is pretty maddening. It’s almost as though the DoD sees public scrutiny as not much more than a bothersome distraction. As I explained when we discussed Taibbi’s article in War on the Rocks’ latest “Net Assessment” podcast, I had Colonel Nathan R. Jessup’s courtroom monologue from the movie A Few Good Men running through my head.

While on the stand, Jessup (played by Jack Nicholson) tries to deflect questions about his immoral and unethical actions: “I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of the very freedom that I provide and then questions the manner in which I provide it. I would rather that you just said ‘thank you’ and went on your way.”

DoD leaders may not have the inclination to explain their shoddy accounting, but they do have a responsibility to the American people who pay the bills; and our elected representatives have an obligation to get to the truth. There are reasonable doubts about the Pentagon's ability to be a responsible steward of the vast sum of money shoveled its way every year. So long as these doubts persist, we shouldn't expect that Americans will want to spend even more.

 

April 5, 2019 3:03PM

What the Data Say About Equal Pay Day

This week saw the passing of “Equal Pay Day,” which marks the culmination of the roughly three extra months that an average female employee had to work in 2019 to match the amount of money made by an average male worker in 2018. Many people see the pay gap as unjust, but is it really a result of rampant sexism in the workplace as the critics allege?

A survey unveiled on Tuesday by CNBC and Survey Monkey suggests that, actually, both men and women are equally pleased with their employment situations and the earnings gap can largely be explained by women being more likely on average to choose part-time work.

“Men have a Workplace Happiness Index score of 72 and women a score of 70, close enough to lack a statistically meaningful difference,” according to the newly released data. That fits with earlier polling that was conducted by Cato’s Dr. Emily Ekins, which found that in the United States, the vast majority of women “believe their own employers treat men and women equally.” Fully 86 percent of women polled believed that their employer pays women equally.

There is still a pay gap between men and women who work full-time, but that may be partly due to men and women opting to work in different fields. Dangerous jobs in fields like mining and fishing, for example, tend to attract men. Those jobs also tend to be relatively well-remunerated. (As HumanProgress.org advisory board member Mark Perry points out, the gender gap in workplace deaths far exceeds the gap in pay).

Even so, among full-time workers, the “pay gap” is rapidly narrowing. Data from the OECD shows that the gender wage gap in median earnings of full-time employees is declining in practically all countries for which there are data. In the United States, highlighted in blue in the graph below, the wage gap has fallen dramatically since the 1970s. In 1975, the U.S. gender wage gap was 38 percent. By 2015, it had shrunk to 18 percent.

[Chart: genderpaygap.png — OECD gender wage gap in median earnings of full-time employees, with the United States highlighted in blue]

That 18 percentage point difference does not take into account important characteristics like “age, education, years of experience, job title, employer, and location,” according to my Cato colleague Vanessa Calder. A recent study, which controlled for those characteristics, concluded that the U.S. gender pay gap is only around five percent, meaning that Equal Pay Day should actually be in January.
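
The date of “Equal Pay Day” is itself just arithmetic on the measured gap. Here is a sketch of one simple convention (a simplification of my own; the official date is set by advocacy groups using Census figures):

```python
import datetime

def equal_pay_day(gap, year=2019):
    """Approximate 'Equal Pay Day' implied by a given pay gap.

    gap: fraction by which the average woman's pay falls short of the
    average man's (0.18 for an 18 percent gap). Someone earning
    (1 - gap) of a reference wage needs 365 / (1 - gap) days of work
    to match 365 days of reference earnings; the surplus days spill
    into the new year.
    """
    extra_days = 365 * gap / (1 - gap)
    return datetime.date(year, 1, 1) + datetime.timedelta(days=round(extra_days))

# The raw 18 percent full-time gap implies roughly 80 extra days of work,
# pushing the date into late March; the controlled gap of about 5 percent
# implies roughly 19 extra days -- i.e., a date in mid-January.
print(equal_pay_day(0.18))  # 2019-03-22
print(equal_pay_day(0.05))  # 2019-01-20
```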

Of course, if any of that small remaining five percent gap is the result of sexist discrimination—rather than additional mitigating factors that the study failed to control for—then that is unacceptable. We should denounce all forms of inequitable treatment, wherever it persists. We should also take a clear-eyed view of the data and recognize the remarkable gains women have made in the workplace—and how labor market participation has transformed women’s lives for the better.

April 5, 2019 1:59PM

A Pretextual Traffic Stop Should Require Sufficient Pretext

Several years ago, Atlantic writer Conor Friedersdorf asked Twitter "If you could add one Bill of Rights style amendment to the Constitution what would it be?" I responded "The Fourth Amendment and 'we mean it.'"

My answer may have been tongue-in-cheek, but quite seriously, the Fourth Amendment and its protections have been eroded by Supreme Court precedents over several decades. As a result, the power of the police to intrude upon the lives of individuals has grown, and they have taken advantage of that power throughout the country.

The Fourth Amendment reads:

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

In plain English, the amendment should mean—among other things—that the police cannot stop (or “seize”) you on the street for no good reason. In the context of traffic stops, the Supreme Court held in Whren v. U.S. (1996) that the police had to have probable cause to believe the driver or vehicle is in violation of a traffic law. In the abstract, Whren makes perfect sense: If an officer observes a moving violation, he or she can stop a driver to address the issue.

In practice, however, Whren has provided virtual carte blanche for police to stop motorists due to innumerable traffic laws, many of which are vague and subjective, that most drivers violate every time they get behind the wheel. As I explained in my 2016 Case Western Reserve Law Review article “Thin Blue Lies,” police routinely use these myriad violations as pretext to stop motorists and investigate other crimes entirely unrelated to traffic safety. Officers understand if they follow any driver long enough, they can almost certainly find a pretext for stopping the vehicle and conducting an informal roadside investigation, subverting the spirit (if not the letter) of the Fourth Amendment’s protection against arbitrary seizure.

Despite this gaping hole in Fourth Amendment protections, police officers in Nebraska initiated a traffic stop on a vehicle without probable cause of any traffic violation whatsoever. (This isn’t hyperbole. In court filings, the State of Nebraska stipulates there was no traffic violation.) As a result of the stop, the driver of the vehicle, Mr. Colton Sievers, was questioned and eventually arrested for methamphetamine possession after a search of his vehicle. He moved to have the evidence thrown out because the original stop was an illegal seizure under the Fourth Amendment.

In a rather unusual decision, the Supreme Court of Nebraska found that the stop was legal under a different case, Illinois v. Lidster (2004), which allowed police to stop vehicles at a checkpoint to seek eyewitnesses to a recent crime in the area, not to investigate the drivers themselves for criminal wrongdoing. The merits of that decision aside, neither Sievers nor the State of Nebraska argued that Lidster would have permitted the stop at issue in the present case.

So unusual is the Nebraska Supreme Court decision that law professor Orin Kerr, with whom Cato scholars often find themselves in opposition on Fourth Amendment jurisprudence, has joined the Sievers legal team and co-authored a cert petition to the U.S. Supreme Court (SCOTUS). The petition asks SCOTUS to either hear Sievers v. Nebraska or summarily reverse the decision below.

In a Volokh Conspiracy blogpost about the Nebraska Supreme Court decision, Kerr wrote:

It's true that Lidster allowed a suspicionless "information-seeking" checkpoint stop, which is effectively an exception to the usual rule that reasonable suspicion is required under Terry v. Ohio. [note: Terry v. Ohio (1968) preceded Whren, requiring police to have reasonable suspicion to initiate a pedestrian stop.] But the key to Lidster was that the officers were only trying to find innocent eyewitnesses to a past crime. The police set up the checkpoint at the scene of the accident hoping to find a member of the public who had seen the crime and might be able to give the police some leads. This fell outside the usual Terry requirement of suspicion, the Lidster Court held, because the police were just asking members of the general public if they could help the police.

[…]

It seems obvious that Sievers was different. This was not a case of "seeking information from the public." The officers testified that they stopped the truck because they thought it might contain evidence of crime -- specifically, stolen goods that they thought were being stored at the house where the truck had been parked. When the stop occurred, the officer who ordered the stop "advised the [other] officers to make a traffic stop to prevent the truck from leaving with any stolen items." The lead officer explained that they need to stop and search the truck "for any items taken from the [firearms] burglary."

And when Sievers was stopped, the officers didn't treat him like a member of the public who perhaps just might have seen a crime. Instead, Sievers was treated as a dangerous suspect.

Hopefully, SCOTUS agrees to hear the Sievers case or summarily reverses the Nebraska Supreme Court. SCOTUS has already ceded police too much leeway to stop motorists on pretextual grounds, but officers should at least have to meet the minimum standard for a legal stop.

You can read the whole cert petition here.

April 4, 2019 4:44PM

Brown v. Board Did Not Start Private Schooling

A common refrain in opposition to school choice is that choice is rooted in racial segregation. Specifically, that people barely thought about choice until the Supreme Court’s 1954 Brown v. Board of Education decision required public schools to desegregate, and racists scrambled to create private alternatives to which they could take public funds. I have dealt with this before and won’t rehash the whole response (hint: Roman Catholics), but a new permutation popped up on Vox yesterday, with author Adia Harvey Wingfield asserting:

Prior to Brown v. Board of Education, most US students attended local public schools. Of course, these were also strictly racially segregated. It wasn’t until the Supreme Court struck down legal segregation that a demand for private (and eventually charter and religious parochial) schools really began to grow, frequently as a backlash to integrated public institutions.

Kudos to Prof. Wingfield for making clear that many public schools were “strictly racially segregated,” a fact that often seems to be soft-pedaled when linking choice to segregation. But her assertion that private schooling didn’t "really" begin to grow until after Brown is not borne out by the data. As the chart below shows, while the share of enrollment in private schools spiked in 1959, the growth in private schooling didn’t suddenly increase right before that. In 1889, the earliest year available, the private school share was 11 percent; it dipped to 7 percent in 1919, then rose fairly steadily until the 1959 peak. (Note: the earlier years of the federal data are in ten-year increments. Also, the data include pre-K enrollments.)

[Chart: private school share of total U.S. school enrollment, 1889 to present (private_school_share.png)]

History is clear that private education has long been with us, and while it has certainly at times been used to avoid racial integration, it has also been employed for reasons having nothing to do with that. This remains true even in our relatively modern era in which “free” public schools have crowded out many private options.

April 4, 2019 12:10PM

Wisconsin’s Butter Grading Law Is Udderly Ridiculous

By Ilya Shapiro and Matthew Larosiere

Minerva Dairy, based in Ohio, is America’s oldest family-owned cheese and butter dairy. It has been producing artisanal, slow-churned butter in small batches since 1935. It has sustained its business through its website and by selling to regional distributors in several states. This model has worked well everywhere except Wisconsin, which requires butter manufacturers to jump through a series of cumbersome and expensive hoops to sell their product.

Of course, Wisconsin is America’s Dairyland, with many large producers who (big surprise) have an interest in limiting competition. At the behest of these companies, the state requires every batch of butter to be “graded” by a specifically state-licensed grader—all of whom live in Wisconsin, except for a half-dozen in neighboring Illinois and a handful around the country that have been licensed only in the last year—who must taste-test every single batch. Because Minerva’s butter is produced in multiple small batches over the course of each day, the law would effectively require the dairy to keep a licensed tester on-site at all times, which is cost-prohibitive. The state admits that the grading scheme has nothing to do with public health or nutrition, but claims that its grades, based largely on taste, inform consumers.

The fact that Wisconsin is trying to shape the taste of butter isn’t even the most absurd part of this story. The criteria used to grade the butter are a ludicrous mad-lib of meaningless jargon not even the state’s experts understand. The law purports to identify such flavor characteristics as “flat,” “ragged-boring,” and “utensil.” (All commonplace terms spoken by consumers in dairy aisles across the nation, no doubt.) The terminology hearkens to a freshmanic—not even sophomoric—term paper on the semiotics of postmodern agrarian literature. To claim that a grade calculated with reference to udder nonsense serves the purpose of informing anyone illustrates the danger inherent in judges’ deferring to government rationales for silly laws that burden people who are just trying to make an honest living.

Our friends at the Pacific Legal Foundation represent Minerva in a lawsuit that challenges the butter-grading law on the grounds that it burdens interstate commerce in violation of the Commerce Clause, and also hurts small dairies’ Fourteenth Amendment rights to due process and equal protection of law. Minerva lost at the district court when the judge applied a toothless, cheesy “rational basis” test to the law in question, giving little weight to the serious concerns described above, then again in the Seventh Circuit (where Cato filed an amicus brief).

Tireless in its pursuit of reasonable review of this silly law, Minerva has asked the Supreme Court to take its case. Because laws that abrogate constitutional rights warrant meaningful judicial oversight, Cato has again filed an amicus brief supporting Minerva’s petition.

Wisconsin’s law directly burdens producers’ right to participate in the state’s butter market, and thus their economic liberty, for no sane or “rational” reason. There are simply no benefits to consumers that come from forcing producers to pay considerable sums to have an arbitrary process deposit a random letter on product packaging. It curdles the mind to argue otherwise.

The Supreme Court will decide before it breaks for the summer whether to take up Minerva Dairy v. Pfaff.