
Lies Our Professors Tell Us

On Sunday, the Washington Post ran an op-ed by the chancellor and vice chancellor of the University of California, Berkeley, in which the writers proposed that the federal government start pumping money into a select few public universities. Why? On the strength of the constantly repeated but never substantiated assertion that state and local governments have been cutting those schools off.

As I point out in the following, unpublished letter to the editor, that is what we in the business call “a lie”:

It’s unfortunate that officials of a taxpayer-funded university felt the need to deceive in order to get more taxpayer dough, but that’s what UC Berkeley’s Robert Birgeneau and Frank Yeary did. Writing about the supposedly dire financial straits of public higher education (“Rescuing Our Public Universities,” September 27), Birgeneau and Yeary lamented decades of “material and progressive disinvestment by states in higher education.” But there’s been no such disinvestment, at least over the last quarter-century. According to inflation-adjusted data from the State Higher Education Executive Officers, in 1983 state and local expenditures per public-college pupil totaled $6,478. In 2008 they hit $7,059. At the same time, public-college enrollment ballooned from under 8 million students to over 10 million. That translates into anything but a “disinvestment” in the public ivory tower, no matter what its penthouse residents may say.

Since letters to the editor typically have to be pretty short I left out readily available data for California, data which would, of course, be most relevant to the destitute scholars of Berkeley. Since I have more space here, let’s take a look: In 1983, again using inflation-adjusted SHEEO numbers, state and local governments in the Golden State provided $5,963 per full-time-equivalent student. In 2008, they furnished $7,177, a 20 percent increase. And this while enrollment grew from about 1.2 million students to 1.7 million! Of course, spending didn’t go up in a straight line – it went up and down with the business cycle – but in no way was there anything you could call appreciable “disinvestment.”
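The percent changes behind these figures are easy to check. A minimal sketch, using only the inflation-adjusted SHEEO dollar amounts quoted above (the function itself is generic):

```python
def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

# National per-pupil funding: $6,478 (1983) -> $7,059 (2008)
national = pct_change(6478, 7059)    # roughly a 9% real increase

# California per-FTE-student funding: $5,963 (1983) -> $7,177 (2008)
california = pct_change(5963, 7177)  # roughly a 20% real increase

print(f"National: {national:.1f}%")      # 9.0%
print(f"California: {california:.1f}%")  # 20.4%
```

Both changes are real (inflation-adjusted) increases, and they come on top of growing enrollment, which is what makes the “disinvestment” claim hard to sustain.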

Unfortunately, higher education is awash in lies like these. Therefore, our debunking will not stop here! On Tuesday, October 6, at a Cato Institute/Pope Center for Higher Education Policy debate, we’ll deal with another of the ivory tower’s great truth-defying proclamations: that colleges and universities raise their prices at astronomical rates not because abundant, largely taxpayer-funded student aid makes doing so easy, but because they have to!

It’s a doozy of a declaration that should set off a doozy of a debate! To register to attend what should be a terrific event, or just to watch online, follow this link.

I hope to see you there, and remember: Don’t believe everything your professors tell you, especially when it impacts their wallets!

Debt Aggravates Spending Disease

USA Today’s Dennis Cauchon reports that “state governments are rushing to borrow money to take advantage of cheap and plentiful credit at a time when tax collections are tumbling.” That will allow them to “avoid some painful spending cuts,” Cauchon notes, but it will sadly impose more pain on taxpayers down the road.

When politicians have the chance to act irresponsibly, they will act irresponsibly. Give them low interest rates and they go on a borrowing binge. The result is that they are in over their heads with massive piles of bond debt on top of the huge unfunded obligations they have built up for state pension and health care plans.

The chart shows that total state and local government debt soared 93 percent this decade. It jumped from $1.2 trillion in 2000 to $2.3 trillion by the second quarter of 2009, according to Federal Reserve data (Table D.3).
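The growth figure can be sanity-checked from the rounded Fed totals quoted above (the 93 percent in the text presumably comes from the unrounded series):

```python
# State and local government debt, Federal Reserve flow-of-funds, Table D.3
debt_2000 = 1.2e12  # total in 2000
debt_2009 = 2.3e12  # total in Q2 2009

growth = (debt_2009 - debt_2000) / debt_2000 * 100
print(f"{growth:.0f}%")  # ~92% from the rounded figures, consistent with 93%
```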

Government debt has soared during good times and bad. During recessions, politicians say that they need to borrow to avoid spending cuts. But during boomtimes, such as from 2003 to 2008, they say that borrowing makes sense because an expanding economy can handle a higher debt load. I’ve argued that there is little reason for allowing state and local government politicians to issue bond debt at all.

Unfortunately, the political urge to spend has resulted in the states shoving a massive pile of debt onto future taxpayers at the same time that they have built up huge unfunded obligations for worker retirement plans.

We’ve seen how uncontrolled debt issuance has encouraged spending sprees at the federal level. Sadly, it appears that the same debt-fueled spending disease has spread to the states and the cities.

Eye of Neutrality, Toe of Frog

I won’t go on at too much length about FCC Chairman Julius Genachowski’s speech at Brookings announcing his intention to codify the principle of “net neutrality” in agency rules—not because I don’t have thoughts, but because I expect it would be hard to improve on my colleague Tim Lee’s definitive paper, and because there’s actually not a whole lot of novel substance in the speech.

The digest version is that the open Internet is awesome (true!) and so the FCC is going to impose a “nondiscrimination” obligation on telecom providers—though Genachowski makes sure to stress this won’t be an obstacle to letting the copyright cops sniff through your packets for potentially “unauthorized” music, or otherwise interfere with “reasonable” network management practices.

And what exactly does that mean?

Well, they’ll do their best to flesh out the definition of “reasonable,” but in general they’ll “evaluate alleged violations…on a case-by-case basis.” Insofar as any more rigid rule would probably be obsolete before the ink dried, I guess that’s somewhat reassuring, but it absolutely reeks of the sort of ad hoc “I know it when I see it” standard that leaves telecoms wondering whether some innovative practice will bring down the Wrath of Comms only after resources have been sunk into rolling it out. Apropos of which, this is the line from the talk that really jumped out at me:

This is not about protecting the Internet against imaginary dangers. We’re seeing the breaks and cracks emerge, and they threaten to change the Internet’s fundamental architecture of openness. […] This is about preserving and maintaining something profoundly successful and ensuring that it’s not distorted or undermined. If we wait too long to preserve a free and open Internet, it will be too late.

To which I respond: Whaaaa? What we’ve actually seen are some scattered and mostly misguided attempts by certain ISPs to choke off certain kinds of traffic, thus far largely nipped in the bud by a combination of consumer backlash and FCC brandishing of existing powers. To the extent that packet “discrimination” involves digging into the content of user communications, it may well run up against existing privacy regulations that require explicit, affirmative user consent for such monitoring. In any event, I’m prepared to believe the situation could worsen. But pace Genachowski, it’s really pretty mysterious to me why you couldn’t start talking about the wisdom—and precise character—of some further regulatory response if and when it began to look like a free and open Internet were in serious danger.

If anything, it seems to me that the reverse is true: If you foreclose in advance the possibility of cross-subsidies between content and network providers, you probably never get to see the innovations you’ve prevented, while discriminatory routing can generally be detected, and if necessary addressed, if and when it occurs. And the worst possible time to start throwing up barriers to a range of business models is exactly when we’re finally seeing the roll-out of the next-generation wireless networks that might undermine the broadband duopoly that underpins the rationale for net neutrality in the first place. In a really competitive broadband market, after all, we can expect deviations from neutrality that benefit consumers to be adopted while those that don’t are punished by the market. I’d much rather see the FCC looking at ways to increase competition than adopt regulations that amount to resigning themselves to a broadband duopoly.

Instead of giving wireline incumbents a new regulatory stick to whack new entrants with, the FCC could focus on facilitating exploitation of “white spaces” in the broadcast spectrum or experimenting with spectral commons to enable user-owned mesh networks. The most perverse consequence I can imagine here is that you end up pushing spectrum owners to cordon off bandwidth for application-specific private networks—think data and cable TV flowing over the same wires—instead of allocating capacity to the public Internet, where they can’t prioritize their own content streams. It just seems crazy to be taking this up now rather than waiting to see how these burgeoning markets shake out.


Public Information and Public Choice

One of the high points of last week’s Gov 2.0 Summit was transparency champion Carl Malamud’s speech on the history of public access to government information – ending with a clarion call for government documents, data, and deliberation to be made more freely available online. The argument is a clear slam-dunk on simple grounds of fairness and democratic accountability. If we’re going to be bound by the decisions made by regulatory agencies and courts, surely at a bare minimum we’re all entitled to know what those decisions are and how they were arrived at. But as many of the participants at the conference stressed, it’s not enough for the data to be available – it’s important that it be free, and in a machine-readable form. Here’s one example of why, involving the PACER system for court records:

The fees for bulk legal data are a significant barrier to free enterprise, but an insurmountable barrier for the public interest. Scholars, nonprofit groups, journalists, students, and just plain citizens wishing to analyze the functioning of our courts are shut out. Organizations such as the ACLU and EFF and scholars at law schools have long complained that research across all court filings in the federal judiciary is impossible, because an eight cent per page charge applied to tens of millions of pages makes it prohibitive to identify systematic discrimination, privacy violations, or other structural deficiencies in our courts.

If you’re thinking in terms of individual cases – even those involving hundreds or thousands of pages of documents – eight cents per page might not sound like a very serious barrier. If you’re trying to do a meta-analysis that looks for patterns and trends across the body of cases as a whole, not only is the formal fee going to be prohibitive in the aggregate, but even free access won’t be much help unless the documents are in a format that can be easily read and processed by computers, given the much higher cost of human CPU cycles. That goes double if you want to be able to look for relationships across multiple different types of documents and data sets.
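The scale gap is stark even in rough numbers. A quick illustration: the per-page fee is the eight cents cited in the quoted passage, but the page counts below are hypothetical round numbers chosen only to show the contrast between one case and a corpus-wide analysis:

```python
FEE_PER_PAGE = 0.08  # PACER charge cited in the quoted passage

single_case = 1_000         # a document-heavy individual case (assumed)
meta_analysis = 50_000_000  # "tens of millions of pages" (assumed midpoint)

print(f"One case:      ${FEE_PER_PAGE * single_case:,.2f}")    # $80.00
print(f"Meta-analysis: ${FEE_PER_PAGE * meta_analysis:,.0f}")  # $4,000,000
```

Eighty dollars is a nuisance; four million dollars is a wall, which is why the fee structure forecloses exactly the systematic research the quoted passage describes.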

All familiar enough to transparency boosters. Is there a reason proponents of limited government ought to be especially concerned with this, beyond a general fondness for openness? Here’s one reason. Public choice theorists often point to the problem of diffuse costs and concentrated benefits as a source of bad policy. In brief, a program that inefficiently transfers a million dollars from millions of taxpayers to a few beneficiaries will create a million-dollar incentive for the beneficiaries to lobby on its behalf, while no individual taxpayer has much motivation to expend effort on recovering his tiny share of the benefit of axing the program. And political actors have similarly strong incentives to create identifiable constituencies who benefit from such programs and kick back those benefits in the form of either donations or public support. What Malamud and others point out is that one thing those concentrated beneficiaries end up doing is expending resources remaining fairly well informed about what government is doing – what regulations and expenditures are being contemplated – in order to be able to act for or against them in a timely fashion.
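The asymmetry described above can be made concrete with a toy calculation; the dollar and population figures here are illustrative assumptions, not data from the post:

```python
transfer = 1_000_000      # size of the inefficient transfer (assumed)
taxpayers = 100_000_000   # who diffusely bear the cost (assumed)
beneficiaries = 10        # who capture the concentrated benefit (assumed)

cost_per_taxpayer = transfer / taxpayers         # one cent each
gain_per_beneficiary = transfer / beneficiaries  # $100,000 each

# Each beneficiary will rationally spend up to $100,000 lobbying to keep
# the program; no taxpayer will rationally spend more than a penny to
# kill it. The lobbying contest is decided before it starts.
print(cost_per_taxpayer, gain_per_beneficiary)
```

Lowering the cost of discovering and organizing around such transfers, as the next paragraphs argue, is what changes this calculus.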

Now, as the costs of organizing dispersed people get lower thanks to new technologies, we’re seeing increasing opportunities to form ad hoc coalitions supporting and opposing policy changes with more dispersed costs and benefits – which is good, and works to erode the asymmetry that generates a lot of bad policy. But incumbent constituencies have the advantage of already being organized and able to invest resources in identifying policy changes that implicate their interests. If ten complex regulations are under consideration, and one creates a large benefit to an incumbent constituent while imposing smaller costs on a much larger group of people, it’s a great advantage if the incumbent is aware of the range of options in advance, and can push for their favored option, while the dispersed losers only become cognizant of it when the papers report on the passage of a specific rule and slowly begin teasing out its implications.

Put somewhat more briefly: Technology that lowers organizing costs can radically upset a truly pernicious public choice dynamic, but only if the information necessary to catalyze the formation of a blocking coalition is out there in a form that allows it to be sifted and analyzed by crowdsourced methods first. Transparency matters less when organizing costs are high, because the fight is ultimately going to be decided by a punch up between large, concentrated interest groups for whom the cost of hiring experts to learn about and analyze the implications of potential policy changes is relatively trivial. As transaction costs fall, and there’s potential for spontaneous, self-identifying coalitions to form, those information costs loom much larger. The timely availability – and aggregability – of information about the process of policy formation and its likely consequences then suddenly becomes a key determinant of the power of incumbent constituencies to control policy and extract rents.

Early Education: Lots of Noise, Little to Hear

This weekend, the Detroit News ran a letter to the editor taking issue with a piece I wrote about the Student Aid and Fiscal Responsibility Act (SAFRA). Strangely, though the main part of SAFRA deals with higher-education loans, the bill contains new spending all over the education map, and I made no specific mention of early-childhood education in my piece (though there is an early-ed component in the bill), the letter is all about pre-K education.

That the pre-K pushers even saw my op-ed as something to write about illustrates how very aggressive they are. Unfortunately, the letter also demonstrates how dubious the message they are so loudly and energetically proclaiming really is. Here’s a telling bit:

Economists, business leaders and scientists all know from cold, hard data that high-quality early education provides a significant return on investment in terms of education, social and health outcomes.

Whether pre-K education is worth even a dime all depends on how you define “high quality.” As Adam Schaeffer lays out in his new early-education policy analysis — and Andrew Coulson reiterates in an exchange with economist James Heckman — the “cold, hard data” say only that a few programs seem to work, and most don’t. Pronouncements about the huge returns on pre-K investment are almost always based on very small, hyper-intensive programs that would be all but impossible to replicate on a large scale. And the programs that do function on a large scale? As Adam lays out, they provide little to no return on investment.

The early-education crowd is very good at getting out its message. Too bad the message itself is so darn suspect.

Obama to Seek Cap on Federal Pay Raises

USA Today reports that President Obama is seeking a cap on federal pay raises:

President Obama urged Congress Monday to limit cost-of-living pay raises to 2% for 1.3 million federal employees in 2010, extending an income squeeze that has hit private workers and threatens Social Security recipients and even 401(k) investors.

…The president’s action comes when consumer prices have fallen 2.1% in the 12 months ending in July, because of a massive drop in energy prices. The recession has taken an even tougher toll on private-sector wages, which rose only 1.5% for the year ended in June — the lowest increase since the government started keeping track in 1980. Private-sector workers also have been subject to widespread layoffs and furloughs.

Last week, economist Chris Edwards discussed data from the Bureau of Economic Analysis that revealed the large gap between the average pay of federal employees and private workers. His call to freeze federal pay “for a year or two” received attention and criticism (FedSmith, GovExec, Federal Times, Matt Yglesias, Conor Clarke), to which he has responded.

As explained on CNN earlier this year, the pay gap between federal and private workers has been widening for some time now:

Author of the Private School Spending Study Responds

Bruce Baker, author of the study of private school spending about which I blogged yesterday, has responded to my critique. Dr. Baker thinks I should “learn to read.”

He takes special exception to my statement that he “makes no serious attempt to determine the extent of the bias [in his chosen sample of private schools], or to control for it.” Baker then points to the following one-paragraph discussion in his 51-page paper that deals with sample bias, which I reproduce here in full [the corresponding table appears on a later page]:

The representativeness of the sample analyzed here can be roughly considered by comparing the pupil-teacher ratios to known national averages. For CAS and independent schools, the pupil-teacher ratio is similar between sample and national (see Figure 21, later in this report). Hebrew/Jewish day schools for which financial data were available had somewhat smaller ratios (suggesting smaller class sizes) than all Hebrew/Jewish day schools, indicating that the mean estimated expenditures for this group might be high. The differential, in the same direction, was even larger for the small group of Catholic schools for which financial data were available. For Montessori schools, however, ratios in the schools for which financial data were available were higher than for the group as a whole, suggesting that estimated mean expenditures might be low.

Even with my admittedly imperfect reading ability, I was able to navigate this paragraph. I did not consider it a serious attempt at dealing with the sample’s selection bias. I still don’t. In fact, it entirely misses the main source of bias. That bias does not stem chiefly from class size differences; it stems from the fact that religious schools need not file spending data with the IRS, and that the relatively few that do file IRS Form 990 (0.5% of Catholic schools!) have a very good reason for doing so: they’re trying harder to raise money from donors. This is not just my own analysis, but also the analysis of a knowledgeable source within Guidestar (the organization from which Baker obtained the data), whose name and contact information I will share with Dr. Baker off-line if he would like to follow up.

Obviously, schools that are trying harder to raise non-tuition revenue are likely to… raise more non-tuition revenue. That is the 800-pound flaming pink chihuahua in the middle of this dataset. According to the NCES, 80 percent of private school students are enrolled in religious schools (see p. 7), and this sample is extremely likely to suffer upward bias on spending by that overwhelming majority of private schools. They may spend the extra money on facilities, salaries, equipment, field trips, materials, or any number of other things apart from, or in addition to, smaller classes.

Baker’s study does not address this source of bias, and so can tell us nothing reliable about religious schools, or private schools in general, either nationally or in the regions it identifies. The only thing that the study tells us with any degree of confidence is that elite independent private schools, which make up a small share of the private education marketplace, are expensive. An uncontroversial finding.

It is surprising to me that this seemingly obvious point was also missed by several other scholars whose names appear in the frontmatter of the paper. This is yet another reminder to journalists: when you get a new and interesting paper, send it to a few other experts for comment (embargoed if you like) before writing it up. Doing so will usually lead to a much more interesting, and accurate, story.