Debt Aggravates Spending Disease

USA Today’s Dennis Cauchon reports that “state governments are rushing to borrow money to take advantage of cheap and plentiful credit at a time when tax collections are tumbling.” That will allow them to “avoid some painful spending cuts,” Cauchon notes, but it will sadly impose more pain on taxpayers down the road.

When politicians have the chance to act irresponsibly, they will act irresponsibly. Give them low interest rates and they go on a borrowing binge. The result is that they are in over their heads with massive piles of bond debt on top of the huge unfunded obligations they have built up for state pension and health care plans.

The chart shows that total state and local government debt soared 93 percent this decade. It jumped from $1.2 trillion in 2000 to $2.3 trillion by the second quarter of 2009, according to Federal Reserve data (Table D.3).
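For anyone who wants to check the arithmetic, here is a minimal sketch of the percentage-increase calculation using the rounded figures cited above; the rounded inputs land at roughly 92 percent, and the 93 percent figure presumably reflects the unrounded Federal Reserve data.

```python
# Percentage increase in state and local debt, using the rounded figures
# cited above from the Federal Reserve data (Table D.3).
debt_2000 = 1.2   # trillions of dollars, 2000
debt_2009 = 2.3   # trillions of dollars, Q2 2009

pct_increase = (debt_2009 - debt_2000) / debt_2000 * 100
print(f"Increase: {pct_increase:.0f}%")   # prints "Increase: 92%" with the rounded inputs
```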

Government debt has soared during good times and bad. During recessions, politicians say that they need to borrow to avoid spending cuts. But during boomtimes, such as from 2003 to 2008, they say that borrowing makes sense because an expanding economy can handle a higher debt load. I’ve argued that there is little reason for allowing state and local government politicians to issue bond debt at all.

Unfortunately, the political urge to spend has resulted in the states shoving a massive pile of debt onto future taxpayers at the same time that they have built up huge unfunded obligations for worker retirement plans.

We’ve seen how uncontrolled debt issuance has encouraged spending sprees at the federal level. Sadly, it appears that the same debt-fueled spending disease has spread to the states and the cities.

Eye of Neutrality, Toe of Frog

I won’t go on at too much length about FCC Chairman Julius Genachowski’s speech at Brookings announcing his intention to codify the principle of “net neutrality” in agency rules—not because I don’t have thoughts, but because I expect it would be hard to improve on my colleague Tim Lee’s definitive paper, and because there’s actually not a whole lot of novel substance in the speech.

The digest version is that the open Internet is awesome (true!) and so the FCC is going to impose a “nondiscrimination” obligation on telecom providers—though Genachowski makes sure to stress this won’t be an obstacle to letting the copyright cops sniff through your packets for potentially “unauthorized” music, or otherwise interfere with “reasonable” network management practices.

And what exactly does that mean?

Well, they’ll do their best to flesh out the definition of “reasonable,” but in general they’ll “evaluate alleged violations…on a case-by-case basis.” Insofar as any more rigid rule would probably be obsolete before the ink dried, I guess that’s somewhat reassuring, but it absolutely reeks of the sort of ad hoc “I know it when I see it” standard that leaves telecoms wondering whether some innovative practice will bring down the Wrath of Comms only after resources have been sunk into rolling it out. Apropos of which, this is the line from the talk that really jumped out at me:

This is not about protecting the Internet against imaginary dangers. We’re seeing the breaks and cracks emerge, and they threaten to change the Internet’s fundamental architecture of openness. [….] This is about preserving and maintaining something profoundly successful and ensuring that it’s not distorted or undermined. If we wait too long to preserve a free and open Internet, it will be too late.

To which I respond: Whaaaa? What we’ve actually seen are some scattered and mostly misguided  attempts by certain ISPs to choke off certain kinds of traffic, thus far largely nipped in the bud by a combination of consumer backlash and FCC brandishing of existing powers. To the extent that packet “discrimination” involves digging into the content of user communications, it may well run up against existing privacy regulations that require explicit, affirmative user consent for such monitoring. In any event, I’m prepared to believe the situation could worsen. But pace Genachowski, it’s really pretty mysterious to me why you couldn’t start talking about the wisdom—and precise character—of some further regulatory response if and when it began to look like a free and open Internet were in serious danger.

If anything, it seems to me that the reverse is true: If you foreclose in advance the possibility of cross-subsidies between content and network providers, you probably never get to see the innovations you’ve prevented, while discriminatory routing can generally be detected, and if necessary addressed, if and when it occurs.  And the worst possible time to start throwing up barriers to a range of business models, it seems to me, is exactly when we’re finally seeing the roll-out of the next-generation wireless networks that might undermine the broadband duopoly that underpins the rationale for net neutrality in the first place. In a really competitive broadband market, after all, we can expect deviations from neutrality that benefit consumers to be adopted while those that don’t are punished by the market. I’d much rather see the FCC looking at ways to increase competition than adopt regulations that amount to resigning themselves to a broadband duopoly.

Instead of giving wireline incumbents a new regulatory stick to whack new entrants with, the FCC could focus on facilitating exploitation of “white spaces” in the broadcast spectrum or experimenting with spectral commons to enable user-owned mesh networks. The most perverse consequence I can imagine here is that you end up pushing spectrum owners to cordon off bandwidth for application-specific private networks—think data and cable TV flowing over the same wires—instead of allocating capacity to the public Internet, where they can’t prioritize their own content streams.  It just seems crazy to be taking this up now rather than waiting to see how these burgeoning markets shake out.

Public Information and Public Choice

One of the high points of last week’s Gov 2.0 Summit was transparency champion Carl Malamud’s speech on the history of public access to government information – ending with a clarion call for government documents, data, and deliberation to be made more freely available online. The argument is a clear slam-dunk on simple grounds of fairness and democratic accountability. If we’re going to be bound by the decisions made by regulatory agencies and courts, surely at a bare minimum we’re all entitled to know what those decisions are and how they were arrived at. But as many of the participants at the conference stressed, it’s not enough for the data to be available – it’s important that it be free, and in a machine-readable form. Here’s one example of why, involving the PACER system for court records:

The fees for bulk legal data are a significant barrier to free enterprise, but an insurmountable barrier for the public interest. Scholars, nonprofit groups, journalists, students, and just plain citizens wishing to analyze the functioning of our courts are shut out. Organizations such as the ACLU and EFF and scholars at law schools have long complained that research across all court filings in the federal judiciary is impossible, because an eight cent per page charge applied to tens of millions of pages makes it prohibitive to identify systematic discrimination, privacy violations, or other structural deficiencies in our courts.

If you’re thinking in terms of individual cases – even those involving hundreds or thousands of pages of documents – eight cents per page might not sound like a very serious barrier. If you’re trying to do a meta-analysis that looks for patterns and trends across the body of cases as a whole, not only is the formal fee going to be prohibitive in the aggregate, but even free access won’t be much help unless the documents are in a format that can be easily read and processed by computers, given the much higher cost of human CPU cycles. That goes double if you want to be able to look for relationships across multiple different types of documents and data sets.
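To make the scale of the problem concrete, here is a back-of-the-envelope sketch of how the per-page fee quoted above compounds; the page counts are hypothetical round numbers, not figures from the article.

```python
# How PACER's eight-cent page fee scales from a single case to the kind of
# corpus-wide pull a meta-analysis needs. Page counts are invented round
# numbers for illustration only.
fee_per_page = 0.08            # dollars per page, as quoted above

single_case_pages = 2_000      # a document-heavy individual case (hypothetical)
corpus_pages = 50_000_000      # "tens of millions of pages" across the judiciary (hypothetical)

print(f"One large case: ${fee_per_page * single_case_pages:,.0f}")   # $160
print(f"Whole corpus:   ${fee_per_page * corpus_pages:,.0f}")        # $4,000,000
```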

All familiar enough to transparency boosters. Is there a reason proponents of limited government ought to be especially concerned with this, beyond a general fondness for openness? Here’s one reason.  Public choice theorists often point to the problem of diffuse costs and concentrated benefits as a source of bad policy. In brief, a program that inefficiently transfers a million dollars from millions of taxpayers to a few beneficiaries will create a million dollar incentive for the beneficiaries to lobby on its behalf, while no individual taxpayer has much motivation to expend effort on recovering his tiny share of the benefit of axing the program. And political actors have similarly strong incentives to create identifiable constituencies who benefit from such programs and kick back those benefits in the form of either donations or public support. What Malamud and others point out is that one thing those concentrated beneficiaries end up doing is expending resources remaining fairly well informed about what government is doing – what regulations and expenditures are being contemplated – in order to be able to act for or against them in a timely fashion.
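As a rough worked example of that asymmetry (with numbers invented purely for illustration), consider how the stakes look on each side of such a transfer:

```python
# Toy illustration of diffuse costs vs. concentrated benefits.
# All figures are made up for the example.
transfer = 1_000_000        # dollars moved by the hypothetical program
beneficiaries = 10          # concentrated winners
taxpayers = 2_000_000       # dispersed losers

stake_per_beneficiary = transfer / beneficiaries   # $100,000 each -- worth lobbying hard for
cost_per_taxpayer = transfer / taxpayers           # $0.50 each -- not worth the effort to fight

print(stake_per_beneficiary, cost_per_taxpayer)    # 100000.0 0.5
```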

Now, as the costs of organizing dispersed people get lower thanks to new technologies, we’re seeing increasing opportunities to form ad hoc coalitions supporting and opposing policy changes with more dispersed costs and benefits – which is good, and works to erode the asymmetry that generates a lot of bad policy. But incumbent constituencies have the advantage of already being organized and able to invest resources in identifying policy changes that implicate their interests. If ten complex regulations are under consideration, and one creates a large benefit to an incumbent constituent while imposing smaller costs on a much larger group of people, it’s a great advantage if the incumbent is aware of the range of options in advance, and can push for their favored option, while the dispersed losers only become cognizant of it when the papers report on the passage of a specific rule and slowly begin teasing out its implications.

Put somewhat more briefly: Technology that lowers organizing costs can radically upset a truly pernicious public choice dynamic, but only if the information necessary to catalyze the formation of a blocking coalition is out there in a form that allows it to be sifted and analyzed by crowdsourced methods first. Transparency matters less when organizing costs are high, because the fight is ultimately going to be decided by a punch up between large, concentrated interest groups for whom the cost of hiring experts to learn about and analyze the implications of potential policy changes is relatively trivial. As transaction costs fall, and there’s potential for spontaneous, self-identifying coalitions to form, those information costs loom much larger. The timely availability – and aggregability – of information about the process of policy formation and its likely consequences then suddenly becomes a key determinant of the power of incumbent constituencies to control policy and extract rents.

Early Education: Lots of Noise, Little to Hear

This weekend, the Detroit News ran a letter to the editor taking issue with a piece I wrote about the Student Aid and Fiscal Responsibility Act (SAFRA). Strangely, the letter is all about pre-K education, even though the main part of SAFRA deals with higher education loans, the bill contains new spending all over the education map, and I made no specific mention of early-childhood education in my piece (though there is an early-ed component in the bill).

That the pre-K pushers even saw my op-ed as something to write about illustrates how very aggressive they are. Unfortunately, the letter also demonstrates how dubious the message they are so loudly and energetically proclaiming really is. Here’s a telling bit:

Economists, business leaders and scientists all know from cold, hard data that high-quality early education provides a significant return on investment in terms of education, social and health outcomes.

Whether pre-K education is worth even a dime all depends on how you define “high quality.” As Adam Schaeffer lays out in his new early-education policy analysis — and Andrew Coulson reiterates in an exchange with economist James Heckman — the “cold, hard data” say only that a few programs seem to work, and most don’t. Pronouncements about the huge returns on pre-K investment are almost always based on very small, hyper-intensive programs that would be all but impossible to replicate on a large scale. And the programs that do function on a large scale? As Adam lays out, they provide little to no return on investment.

The early-education crowd is very good at getting out its message. Too bad the message itself is so darn suspect.

Obama to Seek Cap on Federal Pay Raises

USA Today reports that President Obama is seeking a cap on federal pay raises:

President Obama urged Congress Monday to limit cost-of-living pay raises to 2% for 1.3 million federal employees in 2010, extending an income squeeze that has hit private workers and threatens Social Security recipients and even 401(k) investors.

…The president’s action comes when consumer prices have fallen 2.1% in the 12 months ending in July, because of a massive drop in energy prices. The recession has taken an even tougher toll on private-sector wages, which rose only 1.5% for the year ended in June — the lowest increase since the government started keeping track in 1980. Private-sector workers also have been subject to widespread layoffs and furloughs.

Last week, economist Chris Edwards discussed data from the Bureau of Economic Analysis that revealed the large gap between the average pay of federal employees and private workers. His call to freeze federal pay “for a year or two” received attention and criticism (FedSmith, GovExec, Federal Times, Matt Yglesias, Conor Clarke), to which he has responded.

As explained on CNN earlier this year, the pay gap between federal and private workers has been widening for some time now.

Author of the Private School Spending Study Responds

Bruce Baker, author of the study of private school spending about which I blogged yesterday, has responded to my critique. Dr. Baker thinks I should “learn to read.”

He takes special exception to my statement that he “makes no serious attempt to determine the extent of the bias [in his chosen sample of private schools], or to control for it.” Baker then points to the following one-paragraph discussion in his 51-page paper that deals with sample bias, which I reproduce here in full [the corresponding table appears on a later page]:

The representativeness of the sample analyzed here can be roughly considered by comparing the pupil-teacher ratios to known national averages. For CAS and independent schools, the pupil-teacher ratio is similar between sample and national (see Figure 21, later in this report). Hebrew/Jewish day schools for which financial data were available had somewhat smaller ratios (suggesting smaller class sizes) than all Hebrew/Jewish day schools, indicating that the mean estimated expenditures for this group might be high. The differential, in the same direction, was even larger for the small group of Catholic schools for which financial data were available. For Montessori schools, however, ratios in the schools for which financial data were available were higher than for the group as a whole, suggesting that estimated mean expenditures might be low.

Even with my admittedly imperfect reading ability, I was able to navigate this paragraph. I did not consider it a serious attempt at dealing with the sample’s selection bias. I still don’t. In fact, it entirely misses the main source of bias. That bias does not stem chiefly from class-size differences; it stems from the fact that religious schools need not file spending data with the IRS, and that the relatively few that do file IRS Form 990 (0.5% of Catholic schools!) have a very good reason for doing so: they’re trying harder to raise money from donors. This is not just my own analysis, but also the analysis of a knowledgeable source within Guidestar (the organization from which Baker obtained the data), whose name and contact information I will share with Dr. Baker off-line if he would like to follow up.

Obviously, schools that are trying harder to raise non-tuition revenue are likely to… raise more non-tuition revenue. That is the 800 pound flaming pink chihuahua in the middle of this dataset. According to the NCES, 80 percent of private school students are enrolled in religious schools (see p. 7), and this sample is extremely likely to suffer upward bias on spending by that overwhelming majority of private schools. They may spend the extra money on facilities, salaries, equipment, field trips, materials, or any number of other things apart from, or in addition to, smaller classes.
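A minimal simulation, with entirely invented numbers, of the selection effect being described here: if the handful of religious schools that file Form 990 are disproportionately the fundraising-intensive ones, the average spending of the filers will overstate spending in the sector as a whole.

```python
# Toy model of self-selection into the Form 990 sample. All numbers are
# invented; the point is only the direction of the bias.
import random

random.seed(0)

schools = []
for _ in range(10_000):
    effort = random.random()                                     # fundraising intensity, 0 to 1
    spending = 7_000 + 6_000 * effort + random.gauss(0, 500)     # per-pupil spending rises with effort
    schools.append((effort, spending))

# Filing is rare (roughly 0.5% of schools overall) and heavily skewed
# toward the schools that work hardest at raising donor money.
filers = [spend for effort, spend in schools if random.random() < 0.02 * effort ** 3]

all_mean = sum(spend for _, spend in schools) / len(schools)
filer_mean = sum(filers) / len(filers)
print(f"All schools:     ${all_mean:,.0f}")
print(f"Form 990 filers: ${filer_mean:,.0f}")   # noticeably higher than the sector-wide mean
```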

Baker’s study does not address this source of bias, and so can tell us nothing reliable about religious schools, or private schools in general, either nationally or in the regions it identifies. The only thing that the study tells us with any degree of confidence is that elite independent private schools, which make up a small share of the private education marketplace, are expensive. An uncontroversial finding.

It is surprising to me that this seemingly obvious point was also missed by several other scholars whose names appear in the frontmatter of the paper. This is yet another reminder to journalists: when you get a new and interesting paper, send it to a few other experts for comment (embargoed if you like) before writing it up. Doing so will usually lead to a much more interesting, and accurate, story.

Evidence-based for Thee, But Not for Me

One of the things that strikes me as curious about supporters of the No Child Left Behind Act is that they talk regularly about “evidence” and having everything be “research-based,” yet they often ignore or distort evidence in order to portray NCLB as a success. Case in point, an op-ed in today’s New York Times by the Brookings Institution’s Tom Loveless and the Fordham Foundation’s Michael Petrilli.

Truth be told, the piece doesn’t lionize NCLB, criticizing the law for encouraging schools to neglect high-performing students because its primary goal is to improve the performance of low achievers. Fair enough. The problem is, Loveless and Petrilli assert with great confidence that the law is definitely doing the job it was intended to do. “It is clear,” they write, “that No Child Left Behind is helping low-achieving students.”

As you shall see in a moment, that is an utterly unsustainable assertion according to the best available evidence we have: results from the National Assessment of Educational Progress, which carries no consequences for schools or states and, hence, is subject to very little gaming. Ironically, Loveless and Petrilli make their indefensible pronouncement while criticizing a study for failing to use NAEP in reaching its own conclusions about NCLB.

So what’s wrong with stating that NCLB is clearly helping low-achieving students? Let me count the ways (as I have done before):

  1. Numerous reforms, ranging from class-size reduction to school choice to new nutritional standards, have been occurring at the same time as NCLB. It is impossible to isolate which achievement changes are attributable to NCLB, and which to myriad other reforms.
  2. As you will see in a moment, few NAEP score intervals start cleanly at the beginning of NCLB – which is itself a difficult thing to pinpoint – making it impossible to definitively attribute trends to the law.
  3. When we look at gains on NAEP in many periods before NCLB, they were greater on a per-year basis than during NCLB. That means other things going on in education before NCLB were working just as well or better than things since the law’s enactment.

So let’s go to the scores. Below I have reproduced score trends for both the long-term and regular NAEP mathematics and reading exams. (The former is supposed to be an unchanging test and the latter subject to revision, though in practice both have been pretty consistent measures.) I have posted the per-year score increases or decreases above the segments that include NCLB (but that might also include years without NCLB). I have also posted score increases in pre-NCLB segments that saw greater improvements than segments including NCLB. (Note that on 8th-grade reading I didn’t highlight pre-NCLB segments with smaller score decreases than seen under NCLB. I didn’t want to celebrate backward movement in any era.)

For context, NCLB was signed into law in January 2002, but it took at least a year to get all the regulations written and more than that for the law to be fully implemented. As a result, I’ll leave it to the reader to decide whether 2002, 2003, or even 2004 should be the law’s starting point, noting only that this problem alone makes it impossible to say that NCLB clearly caused anything. In addition, notice that some of the biggest gains under NCLB are in periods that also include many non-NCLB years, making it impossible to confidently attribute those gains to NCLB.

Please note that I calculated per-year changes based on having data collected in the same way from start to end. So some lines are dashed and others solid (denoting changes in how some students were counted); I calculated changes based on start and end points for the type of line used for the period. I also rounded to one decimal point to save space. Finally, I apologize if this is hard to read—I’m no computer graphics wizard—and would direct you to NAEP’s website to check out the data for yourself.
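For clarity, the per-year figures annotated on the charts below are simply the score change over a segment divided by the number of years the segment spans, rounded to one decimal place. A minimal sketch of that calculation, using placeholder numbers rather than actual NAEP scores:

```python
# Per-year change over a NAEP trend segment: total score change divided by
# the number of years, rounded to one decimal place as in the charts.
def per_year_change(start_year, start_score, end_year, end_score):
    return round((end_score - start_score) / (end_year - start_year), 1)

# Placeholder example (not real NAEP values): 224 in 2000 rising to 235 in 2003.
print(per_year_change(2000, 224, 2003, 235))   # 3.7 points per year
```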

[Charts: per-year NAEP score changes for 4th- and 8th-grade regular math and reading, and for age 9, 13, and 17 long-term math and reading.]

So what do the data show us? First, numerous periods that didn’t include NCLB saw growth for low-achieving students that equaled or exceeded the growth in periods with NCLB. That means much of what we were doing before NCLB was apparently more effective than what we’ve been doing under NCLB, though it is impossible to tell from the data what any of those things are. In addition, it is notable that those periods with the greatest gains that include NCLB are typically the ones that also include non-NCLB years, such as 2000 to 2003 for 4th- and 8th-grade math. That means there is inescapable doubt about what caused the gains in those periods most favorable to NCLB. And, let’s not forget, 4th-grade reading saw a downward trend from 2002 to 2003, and 8th-grade reading dropped from 2002 to 2005. That suggests that NCLB was actually decreasing scores for low achievers, and one would have to acknowledge that if one were also inclined to give NCLB credit for all gains.

And so, the evidence is absolutely clear in one regard, but in the opposite direction of what Loveless and Petrilli suggest: One thing you definitely cannot say about NCLB is that it has clearly helped low achievers. And yet, they said it anyway!