Taking Labor Markets Seriously

Perplexity over economic statistics – in particular, the decades-long trends of flat median real wages and increasing income inequality, combined with a recent disconnect between productivity growth and wage increases – is provoking serious, sober-minded people on the center-left to worry whether there might be something badly wrong with America’s economic system.

In a well-written piece (subscription required) for The New Republic, Jonathan Chait chronicles how the economic numbers are undermining confidence among Democrats in Clinton-style, pro-growth economic policies. The bottom line: what good is economic growth if it only benefits those at the very top?

Ezra Klein of The American Prospect is among the anxious. He’s written frequently on this point, but here’s a typical formulation of the problem as he sees it:

What worries me about inequality isn’t what it does, but what’s doing it, namely, a decades-long decline in worker bargaining power and the resultant redirection of productivity increases and corporate profits away from compensation and salaries. 

And here’s another:

[T]hrough mechanisms we’re not entirely sure of, the very richest are siphoning off the economic growth before it flows through the middle and lower classes.  

And here’s yet another that suggests what needs to be done: 

The right has tried to explain this accelerating inequality as an unstoppable structural feature of the new economy: It’s the meritocracy, or computers, or benefits, or global trade. Unfortunately, those explanations are largely bull****. Europe also has computers, and trade, and mobility, and benefits, and has easily avoided the widening chasm we’ve seen. So what makes us different?

In a word, power. Or the distribution of it. Europe has strong unions and active governments; countervailing powers that wrest a portion of the pie for their constituencies. We don’t.  

It’s one thing to be concerned generally about inequality: to hope that all people can participate in the blessings and opportunities that modern capitalism affords, and to look for policies that help those who are lagging. It’s quite another when that concern curdles into a belief that the capitalist system is fundamentally unfair – that workers are failing to get their fair share of the value they create because people at the top are hogging the gains from growth. It’s the difference between being an egalitarian liberal and being a collectivist. Or, in other words, between being a progressive and being a reactionary.

Here’s my question for Ezra et al.: is there something wrong with labor markets? Is there some market failure that is resulting in the systematic exploitation of workers?

I can’t imagine what that market failure would be. Labor markets are pretty vanilla, with lots of buyers (firms) and lots of sellers (workers). Local monopsony problems (e.g., the company town scenario) are unlikely to be significant in a diversified, modern economy with a highly mobile work force. I don’t know of any basis for thinking that firms’ competition for workers is less than robust. Accordingly, there are very strong reasons for thinking that wages and salaries are generally bid into line with the value of the various uses to which labor at a given skill level can be put.

As University of Chicago law professor Richard Epstein puts it:

The single most important thing to understand about the operation of a standard labour market in the world today is that it is immensely boring. It should be thought of in terms of the traditional intersection of supply and demand. It does not present any difficult transactional problems or generate negative externalities that require government control. 

In particular, there is no good reason to think that high earnings for managers and professionals at the top of the pay scale are coming at the expense of everybody else. Firms need workers at various skill levels. Exactly the same incentives guide firms when they are hiring highly skilled workers and when they are hiring less skilled workers. On the one hand, competition will cause them to bid up the price of labor to attract workers away from other job openings; on the other hand, concern with profitability will deter them from overpaying. There isn’t some pot of money in the company safe that’s dedicated to wages and salaries, so that more for some means less for others. Hiring and pay decisions are made at the margin: does adding this worker at this price improve our bottom line? For every new hire, whatever the job description or skill level, firms face strong pressures against either underpaying or overpaying.

(Note: I’m leaving aside for now the question of compensation for top executives, which raises complex issues of corporate governance. For now, it suffices to say that, even if CEOs are being overpaid, the problem affects only a tiny portion of the overall labor market.)

So I just don’t see those “mechanisms we’re not entirely sure of” that Ezra talks about. And just asserting they exist, without providing any theory or evidence of how they might work, won’t cut it as serious analysis.

But what about the decline of private-sector unions? Hasn’t that reduced workers’ bargaining power to their detriment?

Yes, it is true that, through collective bargaining, workers can obtain above-market prices for their labor – just as it is possible for price-fixing cartels to obtain above-market prices for their products. But it is also true that, over the long term, unionization has proved a disaster for affected U.S. industries. By cutting into profits, unions have deterred investment and R&D; the rigid work rules they imposed have hampered innovation and competitiveness; and the unsustainable pension and health care commitments they extracted have turned out to be financially ruinous in the long run.

A resurgence in union power wouldn’t improve the system. Union power distorted the system, ultimately with dismal consequences. Yes, some people came out ahead, but many others have suffered from the effects of underinvestment, inefficiency, and burdensome legacy costs.

Contrary to the fears of Ezra and the rest, America’s labor markets are working fine. Strong incentives are in place for companies to pay people what they’re worth. The system isn’t broken.

Of course you can be disappointed that more people aren’t doing better, in which case you have a couple of options. Option one is to try to supplement the competitive market system: let the system work, accept that the prices it generates offer reasonably accurate information about the economic value of different kinds of work, and then look for policies that will (a) help people increase their value in the marketplace and (b) mitigate hardships for people with relatively low human capital.

Option two is to try to supplant the system by ignoring market signals and squelching competition. In other words, go against everything we know about how best to encourage innovation and wealth creation. Sure, a lucky minority may get windfalls, but everybody else will suffer from the reduction in economic growth.

Option one is egalitarian liberalism; option two is reactionary collectivism. As a libertarian, I am obliged to point out that perverse incentive effects and political dynamics make it very difficult for option one to work well. But option two is flat out doomed to make matters worse.

The 2006 Elections and the War in Iraq

In last Friday’s Washington Post, columnist Charles Krauthammer tried to argue that tomorrow’s mid-term elections would not deliver a historic and decisive blow to President Bush’s agenda, particularly his agenda in Iraq.

Krauthammer’s argument is based on his reading of the history of mid-term elections. He notes that the anticipated “anti-Republican wave” – a net pickup of perhaps 20-25 House seats and 4-6 Senate seats by the Democrats – is relatively modest by historical standards. Reagan lost more in the 6th year of his presidency; so did FDR. One of the greatest mid-term election disasters (not noted by Krauthammer) occurred in Dwight Eisenhower’s 6th year, 1958. At a time when Eisenhower was personally quite popular, the Democrats added nearly 50 members in the House and another 16 in the Senate, building upon their already commanding majorities in both chambers.

I’m all for studying history. But recent history paints a decidedly different picture from the one Krauthammer suggests. The GOP was embarrassed by the results of the 1998 mid-term elections, a failure to capitalize on the 6th-year itch that Krauthammer attributes to “Republican overreaching on the Monica Lewinsky scandal.” Given low unemployment, modest inflation, and continued strong economic growth, it is not inconceivable that the Bush administration could have avoided a 6th-year setback (if so, would Harold Meyerson be lamenting “Democratic overreaching on the Mark Foley scandal”?).

Instead, the GOP is playing defense, and Iraq war advocates such as Krauthammer are scrambling to avoid blame for any of the ill-effects of their ill-conceived war. (See also the VanityFair.com article highlighting neoconservative criticisms of the Bush administration’s execution of the war).

The Iraq war is the decisive issue for the vast majority of Americans, exceeding taxes, immigration, health care, and other presumed drivers of voting behavior. Further, the war is unpopular, the costs have far exceeded the benefits, and there is no end in sight. As David Boaz and David Kirby note in a recent Cato Policy Analysis, the Iraq war was a factor – along with “Republican overspending, social intolerance, [and] civil liberties infringements” – in driving many libertarian voters away from George Bush in 2004. “If that trend continues into 2006 and 2008,” they write, “Republicans will lose elections they would otherwise win.” 

On the whole, voters are frustrated, impatient, and angry. If the GOP staves off disaster, it will do so in spite of, not because of, the disastrous war in Iraq.

And Maybe That’s Just Not a Problem…

Earlier, Michael Cannon blogged about a recent discussion he had with Harvard’s David Cutler on the health-outcome effects of increasing consumers’ sensitivity to the price of their care. (Translation: Have consumers deal directly with some of the costs of their care, using such mechanisms as co-pays, HSAs, etc.)

Cutler worries that increasing consumers’ price sensitivity will worsen Americans’ overall health. Though heightened price sensitivity has the positive effect of reducing the use of expensive health care of dubious value, it also reduces consumers’ use of health care that is of value — a finding supported by the landmark RAND Health Insurance Experiment. The undesirable result, Cutler says, is worse health outcomes.

Cannon responds that broader use of price-sensitivity mechanisms would elicit supply-side market responses, such as lower prices. The undesirable result of worse health outcomes might thus be avoided (and, perhaps, better outcomes might result).

In following this discussion, I have a question: Are worse health outcomes necessarily undesirable, especially in this circumstance?

The value of having consumers deal directly with some of the costs of their care is not simply that doing so reduces the use of dubious health care. The real value is that it increases consumers’ appreciation of the costs and benefits of their care and lets them decide the tradeoffs between those costs and benefits.

Suppose an extremely expensive treatment would provide a consumer with a modest, but very real, positive health outcome. Some consumers may quite rationally choose to put their money toward other uses (ranging from necessities to a “Last Holiday”). On the “health outcome” measure, that decision would register as a negative; on the “overall welfare” measure, it would register as a positive.

Under a zero-price-sensitivity health care model, consumers wouldn’t have that choice. They would have already paid for their health care through their insurance premium (or worse, elected to forgo insurance because the premium was too expensive), and so any health care benefit they could receive under their health plan would be “use it or lose it.” So why not take the expensive treatment that yields modest results? In a price-sensitivity model, by contrast, consumers could quite rationally elect to keep their co-payments and HSA money in some situations.

This is not to say that people are wrong to worry about worse health outcomes, or about consumers making questionable choices. But the worriers do have an intellectual IOU outstanding: Do worse health outcomes necessarily mean worse welfare, if consumers can put their health care money toward other uses?

Better health outcomes are preferable to worse outcomes ceteris paribus. But, with price sensitivity mechanisms like co-pays and HSAs, the ceteris isn’t paribus. (My apologies to Latinists).

Global Warming Costs & Benefits

A few days ago, the British government released the Stern report, a voluminous study arguing that the costs associated with stabilizing carbon dioxide concentrations at 550 parts per million were far less than the costs associated with doing nothing. Although the study acknowledged rather large bounds of uncertainty, the median estimates therein suggested that business as usual (that is, we do nothing) would mean a loss of 5–10% of global GDP every year forever. Most of those harms, however, could be avoided if we spent 1% of global GDP to cut back on greenhouse gas emissions.

There are very good reasons to suspect that Stern’s estimates regarding the cost of cutting back on greenhouse gas emissions are too low and that the damages forecast by Stern are too high. The underlying assumptions of the analysis producing Stern’s estimates have been well dissected by statistician Bjorn Lomborg, climate scientist Roger Pielke, Jr., and economist Richard Tol. But for the moment, let’s put those complaints aside.

My colleague Peter Van Doren and I have done three present value calculations assuming that business as usual (BAU) will reduce global GDP by 2%, 5%, and 10% beginning in 2056 and then in each and every year through the end of time. Don’t worry about the silliness of such a proposition. Oddly enough, once you try calculating beyond 200 years, the numbers don’t really change much, given the need to discount future costs and benefits at 5% per year.

First, we calculated the cost of using 1% of GDP every year through the end of time to reduce greenhouse gas emissions. The net present value of that cost is $15,541 per person in the United States.

Then, we calculated the benefits for U.S. citizens (global GDP figures are pretty dodgy, so we stuck with U.S. GDP figures for the purposes of this exercise). They amount to $36,447 if you accept -10% GDP as your BAU scenario, $18,239 if you accept -5% GDP, and $7,295 if you accept -2% GDP.
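For readers who want to check the arithmetic, here is a minimal sketch of the calculation in Python. It assumes the 5% discount rate noted above, plus the $44,403 per-capita GDP baseline and 2% real growth rate discussed below; it approximately reproduces the figures reported here, though the actual spreadsheet may differ in its details.

```python
# A rough sketch of the present-value arithmetic (stated assumptions,
# not the actual spreadsheet): per-capita GDP of $44,403 today, growing
# 2% per year in real terms, discounted at 5% per year.

BASE_GDP_PER_CAPITA = 44_403   # current U.S. GDP per capita, in dollars
GROWTH = 0.02                  # assumed real per-capita GDP growth rate
DISCOUNT = 0.05                # annual discount rate
HORIZON = 1_000                # "through the end of time"; results barely change past ~200 years

def npv_per_person(share, start_year):
    """Present value, per person, of a stream equal to `share` of per-capita
    GDP in every year from `start_year` onward."""
    return sum(
        share * BASE_GDP_PER_CAPITA * (1 + GROWTH) ** t / (1 + DISCOUNT) ** t
        for t in range(start_year, HORIZON)
    )

# Cost side: spend 1% of GDP every year, starting now.
print(f"Cost of abatement:         ${npv_per_person(0.01, 0):,.0f}")  # roughly $15,541

# Benefit side: avoided GDP losses of 10%, 5%, or 2%, starting in 2056 (year 50).
for loss in (0.10, 0.05, 0.02):
    print(f"Benefit if BAU costs {loss:.0%}: ${npv_per_person(loss, 50):,.0f}")
```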

In other words, Stern’s investment advice makes sense only if you think that warming will hammer GDP by 10% a year. You don’t gain much at all from emission cuts, however, if you think GDP will only drop by 5% a year if we do nothing. And if you think warming will only cost the global economy 2% of GDP every year (the “consensus” belief among economists, which comes from a widely cited analysis by Yale economist William Nordhaus), then Stern’s investment advice is sheer lunacy.

And that’s not even taking into consideration the fact that reducing greenhouse gas emissions might produce no benefits at all. The latest IPCC report — like all the reports before it — acknowledges that the evidence that anthropogenic emissions are primarily driving the warming we’ve detected is strong but circumstantial. Scientists disagree about how large the chances are that we’re wasting our time cutting greenhouse gas emissions, but there’s no disagreement within the latest IPCC report that there’s a chance that anthropogenic emissions are not particularly important factors in climate at present.

Is global warming insurance a good buy? Probably not. And that’s particularly true given that the relatively poor (us) will pay the premium so that the relatively rich (our children and grandchildren) can collect the benefits, if there are any. For example, since 1950 real GDP per capita has increased by about 2% per year. Given that growth rate, U.S. GDP per capita in 100 years would be $321,684 in current dollars, or more than seven times higher than it is at present ($44,403). If global warming cuts GDP by even 10%, then GDP per capita will be $289,515 in 2106 rather than $321,684. Would anyone, let alone liberals, ever propose a 1% tax on those who make $44,000 to create benefits for those who make $289,000?
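The compounding behind those figures is easy to check; here is a quick sketch using the same $44,403 baseline and 2% growth assumption:

```python
# Quick check of the compounding above: 2% real growth for 100 years.
gdp_today = 44_403                    # current U.S. GDP per capita, in dollars
gdp_2106 = gdp_today * 1.02 ** 100    # roughly $321,684
gdp_2106_warmed = gdp_2106 * 0.90     # roughly $289,516 (the $289,515 cited above)
print(f"{gdp_2106:,.0f}  {gdp_2106_warmed:,.0f}")
```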

Winning, But Losing

When the government accuses someone of a criminal offense, it typically proceeds to exert enormous pressure on the accused to surrender the right to a jury trial. Fewer than 10 percent of the criminal cases in America go to trial. Plea bargaining dominates the system.

Sometimes a person will insist on a trial. This is risky because if the government gets a conviction, it will mete out extra punishment for having been forced to go through the “trouble” of a trial. But if the jury sides with the accused, the state loses, right? Wrong. The state can still unleash punishment after an acquittal.

Hard to believe, I know. Here’s a recent ruling (United States v. Ibanga) in which the Court is at pains to explain the law.

After an eleven-day trial, a jury acquitted defendant Michael Ibanga of all of the drug distribution charges against him and one of the two money laundering charges against him in the Indictment. The single count of which defendant Ibanga was convicted typically would result in a Guidelines custody range of 51 to 63 months. However, the United States demanded that the Court sentence defendant Ibanga based on the alleged drug dealing for which he was acquitted. This increased the Guidelines custody range to 151 to 188 months, a difference of about ten years. …

What could instill more confusion and disrespect than finding out that you will be sentenced to an extra ten years in prison for the alleged crimes of which you were acquitted? The law would have gone from something venerable and respected to a farce and a sham.

From the public’s perspective, most people would be shocked to find out that even United States citizens can be (and routinely are) punished for crimes of which they were acquitted.

[…]

The Sentencing Guidelines have accomplished much good in the course of standardizing the sentencing process. Similarly, the Fourth Circuit’s post-Booker presumption approach is a politically savvy parry to the thrust of those who call for more stringent measures, such as the expansion of mandatory minimums. However, it is a charade to say that the Sixth Amendment violations inherent in the Guidelines are cured simply by intoning the word “advisory.” Saying something is so does not make it so.

One of Charles Dickens’ characters, Mr. Bumble, famously observed, “If the law supposes that, … the law is an ass – an idiot.” Charles Dickens, Oliver Twist 463 (3d ed. The New American Library 1961). He was referring, of course, to a legal fiction that had no basis in reality. Many of our fellow citizens believe that Mr. Bumble was right — that the legal process is rigged through sleights of hand that defy common sense. It would only confirm the public’s darkest suspicions to sentence a man to an extra ten years in prison for a crime that a jury found he did not commit. (Italics added.)

This case stands out because the ruling is bitterly critical of this aspect of sentencing law. Most court rulings affirm this stuff all the time, without comment. 

I should point out that the state is powerless in the typical TV-drama situation, where the jury is considering a single murder charge against someone. If there’s a single charge and the jury says “not guilty,” the prosecutor cannot do anything about that result. But that’s TV. Nowadays, when a case goes to trial, there are typically multiple charges. And if the jury comes back with a single “guilty” verdict, the government might still drop a ton of bricks on the defendant — even if the jury said “not guilty” on a dozen other charges.

Does the existence of such a power influence a person’s decision about whether to “waive” his right to a jury trial in the first place — and accept a plea bargain? What do you think?

The constitutional right to a jury trial is on life support, and that’s where the government wants it. Go here for Cato articles related to sentencing.

P4P All Over the Private Sector

At yesterday’s Cato policy forum on pay-for-performance (P4P) in Medicare, I argued that the Medicare bureaucracy should stay out of P4P largely because Medicare would ruin the idea. A Medicare-administered P4P program would be less flexible than private efforts and more likely to harm patients, and the very providers that P4P aims to discipline would have way too much say in it. I recommended confining P4P to private Medicare Advantage health plans. Read my full argument here.

Harvard’s David Cutler argued that Medicare should get involved in P4P because private insurers didn’t have the purchasing power to really force providers to change. At the time, I was unaware of this study by Meredith Rosenthal and her colleagues in this week’s New England Journal of Medicine. They report:

More than half the HMOs, representing more than 80% of persons enrolled, use pay for performance in their provider contracts. Of the 126 health plans with pay-for-performance programs, nearly 90% had programs for physicians and 38% had programs for hospitals.

That probably doesn’t match Medicare’s purchasing power. But it does suggest that P4P can gain a toehold through the private sector.