Archives: 08/2006

Ivory Tower Blueprint, Take Three

Late last week, the Secretary of Education’s Commission on the Future of Higher Education released the third – and probably last – public draft of its report on reforming the American ivory tower. It will likely submit its final report to the secretary in September.

Just like the previous two drafts, number three includes a lot of bad ideas, including one sweeping proposal that all by itself justifies the report’s rejection:

The Secretary of Education, in partnership with states and other federal agencies, should develop a national strategy that would result in better and more flexible learning opportunities, especially for adult learners.

Imagine the kind of mischief policymakers could justify on the grounds that they are creating “better…learning opportunities.” No commission should ever give Washington such a broad license to legislate.

That said, there are a couple of things in draft three that differ markedly from drafts of old, including one that says something I never, ever thought I’d see in a federal report:

A private sector education lending market has fully developed (separate and distinct from loans subsidized by the federal government and made by private financial institutions), which provides a variety of competitive lending products offering many options for funding education expenses. The Commission notes that wider recognition and wider utilization of these options by many families would result in the private sector providing more funding for higher education and in freeing scarce public funds to focus on aid for economically disadvantaged students and families.

A report by a federal commission on higher education that promotes the use of private lending options? Is it April 1st?

And that’s not all. Draft three also notes much more emphatically than the previous two the deleterious, inflationary effects of having tons of third-party funding – primarily, money forced out of the wallets of Joe and Jane Taxpayer – pumped into colleges:

A significant obstacle to better cost controls is the fact that a large share of the cost of higher education is subsidized by public funds (local, state and federal) and by private contributions. These third-party payments tend to insulate what economists would call producers – colleges and universities – from the consequences of their own spending decisions, while consumers – students – also lack incentives to make decisions based on their own limited resources. Just as the U.S. healthcare finance system fuels rising costs by shielding consumers from the consequences of their own spending choices, the high level of subsidies to higher education also provides perverse spending incentives at times.

Now, let me make this clear: If the commission’s final report is essentially unchanged from draft three, it will be a bad thing, encouraging federal and state governments to impose numerous new rules and regulations on America’s ivory tower, which despite all its faults is still the best in the world. At least, though, draft three doesn’t ignore either the root causes of, or free-market solutions for, higher education’s problems.

That alone is a reason for optimism.

Medicare and Overall Health Spending

MIT’s Amy Finkelstein argues that much of the increased use of technology in American medicine (what I term “premium medicine” in Crisis of Abundance) has been induced by Medicare, which reduced out-of-pocket costs and thereby increased the demand for care.

Perhaps the easiest place to grasp her work is an archived presentation at the AEI, particularly the PowerPoint slides linked there. Also, see links given by Tyler Cowen.

Finkelstein compares the change in insurance coverage induced by Medicare across different states–in some states the elderly were relatively well insured prior to Medicare, and in other states they were not. Using this “natural experiment” methodology, she finds that Medicare accounts for a large share of the increased spending on health care since 1965. However, she does not find any corresponding improvement in health. She does argue that Medicare had a very large risk-reduction benefit, saving the very sick from having to bear huge financial costs.

To me, this suggests trying to maximize the insurance benefits of health insurance (reducing financial risk) while minimizing its distortionary effects. The proposals in my book would head in that direction.

My proposals are politically radical but economically sensible, as Finkelstein’s research reinforces. You can hear more about Crisis of Abundance at this Cato event on August 29th, which also will feature journalist Sebastian Mallaby and Democratic wonk Jason Furman.

How Did You Like the Cybercrime Treaty Debate?

Perhaps you weren’t aware of the Senate’s debate over the cybercrime treaty. Most people weren’t. The Senate quietly approved the treaty yesterday.

The treaty is the product of years of diligent work among governments’ law enforcement departments to increase their collaboration. It lacks a dual criminality requirement, so Americans may be investigated in the United States for things that are not crimes here. And it applies not just to “cyber” crimes but to digital evidence of any crime, so foreign governments now may begin using U.S. law enforcement to help them gather evidence in all kinds of cases.

But you already knew that if you were following the debate. You were following the debate, weren’t you?

P4P Hubris

Dr. Rob Lamberts also comments on my paper on pay-for-performance (P4P) in Medicare. Lamberts (like Holt) seems to have blogged about the paper after reading only the press release. Though the paper probably would answer most of the questions they raise, I’ll respond to two of Lamberts’ comments.

1. Lamberts argues that a P4P experiment in Britain’s National Health Service (NHS) refutes my claim that “provider-focused P4P incentives can encourage inappropriate care or reduce access to care for patients with multiple illnesses or low incomes.”

Not quite. A P4P scheme can avoid those effects, but not without causing other problems. For example, the financial incentives could involve only additional payments to physicians and no payment reductions for “low-quality” care. That’s what the NHS did; physicians’ gross incomes increased by an average of $40,000.

A rewards-only approach reduces the incentive for physicians to avoid very sick or very poor patients, who make it difficult for the physician to meet the performance goals. However, that approach makes the P4P effort more costly. Lamberts himself suggests that Medicare’s P4P efforts should be budget-neutral, which would make it more likely that physicians would give outlier patients inappropriate care, avoid those patients, or otherwise game the system.

Another way the NHS experiment avoided inappropriate care or a reduction in access for outliers was by allowing physicians the discretion to disregard as many of their patients as they wished when calculating their compliance score. But the availability of such “exclusion reporting” also gave physicians an opportunity to game the system. Rather than provide the desired type of care to their patients, physicians could use exclusion reporting to increase their incomes without changing their behavior. The authors of the study cited by Lamberts note: “More research is needed to determine whether these practices are excluding patients for sound clinical reasons or in order to increase income.”
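To see why exclusion reporting invites gaming, consider a minimal numerical sketch (the patient counts here are hypothetical, not taken from the NHS study): excluding non-compliant patients from the denominator raises the measured compliance score even when the care actually delivered is unchanged.

```python
# Hypothetical illustration of how "exclusion reporting" can inflate a
# measured compliance score without any change in clinical behavior.

def compliance_score(compliant: int, counted: int) -> float:
    """Share of counted patients who met the performance target."""
    return compliant / counted

# Suppose a physician has 100 patients, 80 of whom meet the target.
honest = compliance_score(80, 100)       # 0.80

# Excluding 15 of the 20 non-compliant patients from the denominator
# lifts the reported score -- with identical care delivered.
gamed = compliance_score(80, 100 - 15)   # about 0.94

print(f"honest: {honest:.2f}, with exclusions: {gamed:.2f}")
```

The payer observes only the reported ratio, so it cannot distinguish a physician who improved care from one who simply pruned the denominator, which is exactly the ambiguity the study’s authors flag.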

2. Lamberts writes that the Brits “were able to achieve astonishing improvements to their quality numbers and improve physician incomes at the same time.”

Of course, these two ends are not in conflict. It’s easy to get people to do what you want when you dangle $40,000 in front of them.

But we can’t even be sure that the NHS P4P experiment made any improvements in quality — much less astonishing improvements in quality. Although median reported achievement was an impressive-sounding 83.4 percent, according to the authors of that study:

There is no baseline with which to compare performance in the first year of the U.K. program, although the quality of care was already improving before its introduction.

If we don’t know what compliance rates were before the NHS introduced financial incentives for compliance, and quality was improving anyway for other reasons, how do we know whether or how much their quality numbers improved, or how much of that change was due to P4P? 

If we don’t even know that, we certainly don’t know whether the effort was worth the $3.2 billion the NHS spent in 2004.

Medicare Reform: It’s All about Control

Matthew Holt of The Health Care Blog takes a thoughtful stab at my recent paper on “pay-for-performance” and Medicare. 

Pay-for-performance is one of those hip health policy buzzwords that comes with a catchy acronym: P4P. The idea is that private insurers or the government can improve health care quality through financial rewards for providers who deliver what the payer considers “quality” care. P4P stands in contrast to “pay-for-volume,” which is how third-party payers have traditionally paid providers.

My thesis is that P4P has promise, but is very, very tricky. A bureaucracy that rewards providers for what it considers high-quality care can actually encourage low-quality care for the poor saps who happen not to be the average patient. 

There’s nothing wrong with P4P, so long as patients who are getting short-changed have the right to opt out (i.e., switch insurers). P4P’s potential is sure to be lost if the Centers for Medicare and Medicaid Services (CMS) get into the game. For example, since Medicare’s P4P scheme would be emulated by Medicare Advantage plans and other private insurers, many patients would have no escape.

Holt tries to link (reconcile?) my opposition to P4P in traditional Medicare (and support for P4P in Medicare Advantage plans) with my suggestion that Medicare should subsidize seniors with a risk-adjusted voucher rather than coverage. Let me see if this helps thread the two together:

There’s a difference between helping someone in need and making all her decisions for her. Medicare has traditionally tried to do both, offering subsidies to seniors but also dictating what their coverage looks like, payment rates, etc.  If CMS starts defining “quality” for 45 million seniors (and by extension, millions of non-seniors), the government will be making even more decisions that it’s really not qualified to make. Better that Congress just give seniors the cash and let them make their own decisions about coverage and care and quality. Markets have a funny way of helping people make those decisions.

Yes, there will still be some seniors who are ill-equipped to do that. But that small minority of seniors already needs — and gets — similar assistance. They can be taken care of without turning the rest of the health care sector into a high-cost, iffy-quality, rent-seeking cesspool. 

Not as Easy as Right and Wrong

Over at The American Prospect, Matthew Yglesias takes issue with the assertion I made yesterday that if Kansas is ever going to have peace over creationism and evolution, parents must be given the right to take their public education dollars and choose their children’s schools. Instead of forcing parents to support – and constantly fight to control – one school system, why not let them choose the institutions they want?

Yglesias argues that whether it’s parents or government that decides what children will be taught, kids will have no choice in the matter. The question to him, then, is “who is likely to teach most children the right stuff?” If it’s government, then there’s no need for choice.

That sounds reasonable enough. That is, until you consider how incredibly hard it often is to know, and to get people to agree on, what constitutes “the right stuff.” Creationists, after all, are just as sure that they are right about Darwin as evolutionists think themselves to be.

Of course, in education, Darwin is just the beginning: Is phonics-based instruction the right or wrong way to teach reading? Should American history be taught in a “traditional” way that focuses on the nation’s great achievements, or is it right to focus on the country’s flaws? What amount of time should students spend studying fine art instead of, say, physics?  Is it wrong for a student newspaper to run an article critical of the school’s principal? And so on…

Clearly, when it comes to countless disputes in education, what is truly right or truly wrong is very difficult to know. With that in mind, we must answer the question: Is it better that government impose one idea of what’s right on all children, or that parents be able to seek freely what they think is right for their own kids?

At the risk of contradicting myself, I think the latter is the obvious right answer.