Topic: Health Care & Welfare

And Maybe That’s Just Not a Problem…

Earlier, Michael Cannon blogged about a recent discussion between himself and Harvard’s David Cutler on the health-outcome effects of increasing consumers’ sensitivity to the price of their care. (Translation: have consumers deal directly with some of the costs of their care through mechanisms such as co-pays, HSAs, etc.)

Cutler worries that increasing consumers’ price sensitivity will worsen Americans’ overall health. Though heightened price sensitivity has the positive effect of reducing the use of expensive health care of dubious value, it also reduces the use of care that is genuinely valuable — an outcome supported by the landmark Rand Health Insurance Experiment. The undesirable result, Cutler says, is worse health outcomes.

Cannon responds that broader use of price-sensitivity mechanisms would elicit supply-side market responses, such as lower prices. The undesirable result of worse health outcomes might thus be avoided (and, perhaps, better outcomes might result).

In following this discussion, I have a question: Are worse health outcomes necessarily undesirable, especially in this circumstance?

The value of having consumers deal directly with some of the costs of their care is not simply that doing so reduces the use of dubious health care. The real value is that it increases consumers’ appreciation of the costs and benefits of their care and lets them decide the tradeoffs between those costs and benefits.

Suppose an extremely expensive treatment would provide a consumer with a modest, but very real, positive health outcome. Some consumers may quite rationally choose to put their money toward other uses (ranging from necessities to a “Last Holiday”). On the “health outcome” measure, that decision would register as a negative; on the “overall welfare” measure, it would register as a positive.

Under a zero-price-sensitivity health care model, consumers wouldn’t have that choice. They would have already paid for their health care through their insurance premium (or worse, elected to forgo insurance because the premium was too expensive), and so any health care benefit they could receive under their health plan would be “use it or lose it.” So why not take the expensive treatment that yields modest results? Under a price-sensitivity model, by contrast, consumers could in some situations quite rationally elect to keep their co-payment and HSA money.
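To make the tradeoff concrete, here is a toy calculation. All dollar figures are hypothetical, chosen only to illustrate the argument:

```python
# Toy illustration (hypothetical numbers): a treatment costs $50,000
# and delivers a modest health benefit the patient values at $10,000.

treatment_cost = 50_000   # resource cost of the treatment
health_benefit = 10_000   # patient's valuation of the health gain

# Zero price sensitivity: care is prepaid via premiums, so the
# patient's out-of-pocket price is $0 and the benefit is "use it or
# lose it." Any positive benefit beats a zero price, so the patient
# takes the treatment.
price_to_patient_insured = 0
takes_treatment_insured = health_benefit > price_to_patient_insured

# Price-sensitive model: the patient faces (say) the full cost and
# weighs the benefit against what the money could buy elsewhere.
price_to_patient_hsa = treatment_cost
takes_treatment_hsa = health_benefit > price_to_patient_hsa

# Skipping the treatment makes the measured health outcome worse,
# but overall welfare is higher: the patient keeps $50,000 to spend
# on things he values more than the $10,000 health gain.
welfare_gain_from_skipping = treatment_cost - health_benefit

print(takes_treatment_insured)        # patient on full insurance treats
print(takes_treatment_hsa)            # price-sensitive patient declines
print(welfare_gain_from_skipping)     # net welfare gain from declining
```

On the “health outcome” scorecard the price-sensitive patient looks worse off; on the “overall welfare” scorecard he is $40,000 ahead.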

This is not to say that people are wrong to worry about worse health outcomes, or about consumers making questionable choices. But the worriers do have an intellectual IOU outstanding: Do worse health outcomes necessarily mean worse welfare, if consumers can put their health care money toward other uses?

Better health outcomes are preferable to worse outcomes ceteris paribus. But, with price sensitivity mechanisms like co-pays and HSAs, the ceteris isn’t paribus. (My apologies to Latinists).

P4P All Over the Private Sector

At yesterday’s Cato policy forum on pay-for-performance (P4P) in Medicare, I argued that the Medicare bureaucracy should stay out of P4P, largely because Medicare would ruin the idea. A Medicare-administered P4P program would be less flexible than private efforts, more likely to harm patients, and subject to far too much influence from the very providers P4P aims to discipline. I recommended confining P4P to private Medicare Advantage health plans. Read my full argument here.

Harvard’s David Cutler argued that Medicare should get involved in P4P because private insurers lack the purchasing power to force providers to change. At the time, I was unaware of this study by Meredith Rosenthal and her colleagues in this week’s New England Journal of Medicine. They report:

More than half the HMOs, representing more than 80% of persons enrolled, use pay for performance in their provider contracts. Of the 126 health plans with pay-for-performance programs, nearly 90% had programs for physicians and 38% had programs for hospitals.

That probably doesn’t match Medicare’s purchasing power. But it does suggest that P4P can gain a toehold through the private sector.

When Patients Change, Do Providers Change Too?

Harvard’s David Cutler visited Cato yesterday to participate in a small group discussion about cost-effectiveness in medicine, and also in a panel on improving quality in Medicare. (You can watch the latter event here in a couple of days.) My colleague Arnold Kling blogs about issues discussed at both events. 

I am struck by one issue that emerged, which has to do with price-sensitivity, provider behavior, and health outcomes. Cutler argued that when patients are more price-sensitive (i.e., when they have to pay for more of the cost of their medical care), they tend to cut back both on care that would have done nothing for them, and on care that would have helped them. He postulates that if we were to move all Americans into health savings accounts (HSAs), thereby making patients more price-sensitive, we would see worse health outcomes than we see now. 

I am skeptical of that prediction. I think that if the move to HSAs were confined to a small, randomly selected subset of the population (call it “Rand II”), Cutler’s prediction would be more plausible, though by no means certain. There is precious little evidence suggesting (and it does no more than suggest) that for some patients, greater price-sensitivity leads to worse health outcomes.

However, even if we assume that Rand II would show that greater price-sensitivity leads to worse health outcomes, it does not follow that we would get the same result were the entire population made more price-sensitive. The reason is that with a population-wide shift, the supply side of health care markets would respond to the enormous change on the demand side. Faced with patients who are less eager to consume medical care, providers would have to do a lot more to sell their services, including:

  • conducting research on the usefulness of their services,
  • improving the quality of their services,
  • lowering their prices, and
  • educating patients about the value of their services.

These responses should enable patients to make smarter decisions about what care to consume and what to avoid. Rather than cutting back equally on beneficial and useless care, patients, with more help discerning between the two, would cut back more heavily on the useless kind. Downward pressure on prices should make cutting back on beneficial care rarer still.

MIT economist Amy Finkelstein demonstrates that the supply side of medical care does respond to demand-side changes. For 30 years, economists believed that the expansion of health insurance (which reduced price-sensitivity) had a relatively small impact on the growth of health spending. That belief rested on a demand-side study (Rand I) that was too small to induce, or to measure, any supply-side responses to the change in price sensitivity. Using a data set that does capture supply-side responses, Finkelstein estimates that the effect of the expansion of health insurance on health spending is six times greater than the demand-side-only Rand I experiment suggests.

Casual observation suggests that supply-side responses are helping price-sensitive patients make better choices right now. At the same time that HSAs and other insurance options are making millions of patients more price-sensitive, insurers and entrepreneurs are furnishing more of the price and quality information that patients need.

It would be foolish to claim that the supply-side response to price-sensitive consumers would be so great that patients would have perfect information and would never make mistakes. Yet most opponents of making patients more price-sensitive embrace the equally foolish assumption that there would be no supply-side response to the new incentives coming from the demand side. I say “most” because Cutler and others are not in this group. If I understood Cutler, he acknowledges that there will be such supply-side responses, and that we have no way of knowing whether or how much they would improve health outcomes.

True enough. But it’s something like 50 percent of the debate over HSAs and health outcomes. ’Twould be nice to have opponents of HSAs and the like acknowledge and engage it.

The Costs of War

With 103 American fatalities, October was the fourth-bloodiest month since the beginning of the Iraq War. But the focus on the number of battle deaths may understate the true costs of the war for the American soldier. Due to innovations in battlefield medicine, we’re getting much better at saving soldiers’ lives. In WWII, 30 percent of those injured in combat died. In Vietnam, and even in the Gulf War, it was 24 percent. Now it’s around 10 percent. That is unquestionably a positive development. But it also means that many of those we save are horribly maimed. As this article from the New England Journal of Medicine describes:

One airman with devastating injuries from a mortar attack outside Balad on September 11, 2004, was on an operating table at Walter Reed just 36 hours later. In extremis from bilateral thigh injuries, abdominal wounds, shrapnel in the right hand, and facial injuries, he was taken from the field to the nearby 31st CSH in Balad. Bleeding was controlled, volume resuscitation begun, a guillotine amputation at the thigh performed. He underwent a laparotomy with diverting colostomy. His abdomen was left open, with a clear plastic bag as covering. He was then taken to Landstuhl by an Air Force Critical Care Transport team. When he arrived in Germany, Army surgeons determined that he would require more than 30 days’ recovery, if he made it at all. Therefore, although resuscitation was continued and a further washout performed, he was sent on to Walter Reed. There, after weeks in intensive care and multiple operations, he did survive. This is itself remarkable. Injuries like his were unsurvivable in previous wars. The cost, however, can be high. The airman lost one leg above the knee, the other in a hip disarticulation, his right hand, and part of his face. How he and others like him will be able to live and function remains an open question….

[F]or many new problems, the answers remain unclear. Early in the war, for example, Kevlar vests proved dramatically effective in preventing torso injuries. Surgeons, however, now find that IEDs are causing blast injuries that extend upward under the armor and inward through axillary vents. Blast injuries are also producing an unprecedented burden of what orthopedists term “mangled extremities” — limbs with severe soft-tissue, bone, and often vascular injuries. These can be devastating, potentially mortal injuries, and whether to amputate is one of the most difficult decisions in orthopedic surgery. Military surgeons have relied on civilian trauma criteria to guide their choices, but those criteria have not proved reliable in this war. Possibly because the limb injuries are more extreme or more often combined with injuries to other organs, attempts to salvage limbs following the criteria have frequently failed, with life-threatening blood loss, ischemia, and sepsis.

Even with all the efforts made to save limbs, “the amputation rate in Iraq is double that of previous wars,” as the LA Times reported earlier this year, in its three-part series on wounded American soldiers. 
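The arithmetic behind that shift is worth spelling out. Using the fatality rates cited above (30 percent in WWII, 24 percent in Vietnam and the Gulf War, roughly 10 percent now), here is a back-of-the-envelope sketch of how many surviving wounded each combat death implies — an illustration of the rates already quoted, not official casualty data:

```python
# Surviving wounded implied per combat fatality, given the share of
# the combat-injured who die. Rates are those cited in the post.
fatality_rates = {"WWII": 0.30, "Vietnam/Gulf": 0.24, "Iraq": 0.10}
october_deaths = 103  # reported U.S. fatalities for October

for war, rate in fatality_rates.items():
    # If a fraction `rate` of the combat-injured die, then each death
    # corresponds to (1 - rate) / rate injured soldiers who survive.
    survivors_per_death = (1 - rate) / rate
    implied_wounded = october_deaths * survivors_per_death
    print(f"{war}: {survivors_per_death:.1f} surviving wounded per death "
          f"(~{implied_wounded:.0f} wounded for {october_deaths} deaths)")
```

At a 10 percent fatality rate, each death implies roughly nine surviving wounded, versus about two at WWII’s rate — so the same monthly death toll now corresponds to nearly four times as many surviving wounded, many of them maimed, as it would have sixty years ago. That is the sense in which counting battle deaths understates the war’s cost.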

That war is a bloody business is hardly a novel point. And, of course, it is not by itself an argument against any particular war. Had these men incurred similar injuries charging Al Qaeda positions at Tora Bora, that would have been terrible, but far easier to justify. However, it is becoming increasingly hard to justify the costs of our open-ended commitment in Iraq, where our mission becomes ever murkier and victory, however defined, continues to recede over the horizon.