
Feds Giveth Jobs & Cars, Then Taketh Away Again

The bad news this morning on the impact of both the federal stimulus and the Cash for Clunkers program should not come as a surprise to anyone who has paid attention to the history of government intervention in the economy.

New data showing that the number of jobs created by the stimulus has been overstated by thousands is compelling, but it’s really a secondary issue. The primary issue is that the government cannot “create” anything without hurting something else. To “create” jobs, the government must first extract wealth from the economy through taxation or raise the money by issuing debt. Regardless of whether the burden is borne by present or future taxpayers, the result is the same: job creation and economic growth are inhibited.

At the same time the government is taking undeserved credit for “creating jobs,” a new analysis of the Cash for Clunkers program by Edmunds.com shows that most cars bought with taxpayer help would have been purchased anyway. The same analysis finds that post-Clunkers car sales would have been higher in the absence of the program, indicating that the program merely shifted the timing of auto purchases.

Once again, the government claims to have “created” economic growth, but the reality is that Cash for Clunkers had no positive long-term effect and actually destroyed wealth in the process.

Right now businesses and entrepreneurs are hesitant to make investments or add new workers because they’re worried about what Washington’s interventions could mean for their bottom lines. Potential higher taxes, health care mandates, and costly climate change legislation are all being cited by businesspeople as reasons why further investment and hiring are on hold. Unless this “regime uncertainty” subsides, the U.S. economy could be in for sluggish growth for a long time to come.

For more on the topic of regime uncertainty and economic growth, please see the Downsizing Government blog.

Lies Our Professors Tell Us

On Sunday, the Washington Post ran an op-ed by the chancellor and vice chancellor of the University of California, Berkeley, in which the writers proposed that the federal government start pumping money into a select few public universities. Why? Because of the constantly repeated but never substantiated assertion that state and local governments have been cutting those schools off.

As I point out in the following unpublished letter to the editor, that is what we in the business call “a lie”:

It’s unfortunate that officials of a taxpayer-funded university felt the need to deceive in order to get more taxpayer dough, but that’s what UC Berkeley’s Robert Birgeneau and Frank Yeary did. Writing about the supposedly dire financial straits of public higher education (“Rescuing Our Public Universities,” September 27), Birgeneau and Yeary lamented decades of “material and progressive disinvestment by states in higher education.” But there’s been no such disinvestment, at least over the last quarter-century. According to inflation-adjusted data from the State Higher Education Executive Officers, in 1983 state and local expenditures per public-college pupil totaled $6,478. In 2008 they hit $7,059. At the same time, public-college enrollment ballooned from under 8 million students to over 10 million. That translates into anything but a “disinvestment” in the public ivory tower, no matter what its penthouse residents may say.

Since letters to the editor typically have to be pretty short, I left out readily available data for California, data which would, of course, be most relevant to the destitute scholars of Berkeley. Since I have more space here, let’s take a look: In 1983, again using inflation-adjusted SHEEO numbers, state and local governments in the Golden State provided $5,963 per full-time-equivalent student. In 2008, they furnished $7,177, a 20 percent increase. And this while enrollment grew from about 1.2 million students to 1.7 million! Of course, spending didn’t go up in a straight line – it went up and down with the business cycle – but in no way was there anything you could call appreciable “disinvestment.”
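For anyone who wants to check the arithmetic, here’s a quick back-of-the-envelope calculation using only the figures quoted above. The underlying SHEEO series isn’t reproduced here, so treat this as a sanity check rather than a substitute for the full data:

```python
# Back-of-the-envelope check of the inflation-adjusted SHEEO figures quoted above.
# Only the numbers cited in the post are used; the full SHEEO series is not reproduced here.

def pct_change(start, end):
    """Percentage change from start to end."""
    return (end - start) / start * 100

# National: state/local spending per public-college pupil, 1983 vs. 2008
print(f"National per-pupil funding:  {pct_change(6478, 7059):+.1f}%")    # roughly +9%

# California: state/local funding per full-time-equivalent student, 1983 vs. 2008
print(f"California per-FTE funding:  {pct_change(5963, 7177):+.1f}%")    # roughly +20%

# California public-college enrollment over the same period (approximate figures from the post)
print(f"California enrollment:       {pct_change(1.2e6, 1.7e6):+.0f}%")  # roughly +42%
```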

Unfortunately, higher education is awash in lies like these. Therefore, our debunking will not stop here! On Tuesday, October 6, at a Cato Institute/Pope Center for Higher Education Policy debate, we’ll deal with another of the ivory tower’s great truth-defying proclamations: that colleges and universities raise their prices at astronomical rates not because abundant, largely taxpayer-funded student aid makes doing so easy, but because they have to!

It’s a doozy of a declaration that should set off a doozy of a debate! To register to attend what should be a terrific event, or just to watch online, follow this link.

I hope to see you there, and remember: Don’t believe everything your professors tell you, especially when it impacts their wallets!

Debt Aggravates Spending Disease

USA Today’s Dennis Cauchon reports that “state governments are rushing to borrow money to take advantage of cheap and plentiful credit at a time when tax collections are tumbling.” That will allow them to “avoid some painful spending cuts,” Cauchon notes, but it will sadly impose more pain on taxpayers down the road.

When politicians have the chance to act irresponsibly, they will act irresponsibly. Give them low interest rates and they go on a borrowing binge. The result is that they are in over their heads with massive piles of bond debt on top of the huge unfunded obligations they have built up for state pension and health care plans.

The chart shows that total state and local government debt soared 93 percent this decade. It jumped from $1.2 trillion in 2000 to $2.3 trillion by the second quarter of 2009, according to Federal Reserve data (Table D.3).

Government debt has soared during good times and bad. During recessions, politicians say that they need to borrow to avoid spending cuts. But during boomtimes, such as from 2003 to 2008, they say that borrowing makes sense because an expanding economy can handle a higher debt load. I’ve argued that there is little reason for allowing state and local government politicians to issue bond debt at all.

Unfortunately, the political urge to spend has resulted in the states shoving a massive pile of debt onto future taxpayers at the same time that they have built up huge unfunded obligations for worker retirement plans.

We’ve seen how uncontrolled debt issuance has encouraged spending sprees at the federal level. Sadly, it appears that the same debt-fueled spending disease has spread to the states and the cities.

Eye of Neutrality, Toe of Frog

I won’t go on at too much length about FCC Chairman Julius Genachowski’s speech at Brookings announcing his intention to codify the principle of “net neutrality” in agency rules—not because I don’t have thoughts, but because I expect it would be hard to improve on my colleague Tim Lee’s definitive paper, and because there’s actually not a whole lot of novel substance in the speech.

The digest version is that the open Internet is awesome (true!) and so the FCC is going to impose a “nondiscrimination” obligation on telecom providers—though Genachowski makes sure to stress this won’t be an obstacle to letting the copyright cops sniff through your packets for potentially “unauthorized” music, or otherwise interfere with “reasonable” network management practices.

And what exactly does that mean?

Well, they’ll do their best to flesh out the definition of “reasonable,” but in general they’ll “evaluate alleged violations…on a case-by-case basis.” Insofar as any more rigid rule would probably be obsolete before the ink dried, I guess that’s somewhat reassuring, but it absolutely reeks of the sort of ad hoc “I know it when I see it” standard that leaves telecoms wondering whether some innovative practice will bring down the Wrath of Comms only after resources have been sunk into rolling it out. Apropos of which, this is the line from the talk that really jumped out at me:

This is not about protecting the Internet against imaginary dangers. We’re seeing the breaks and cracks emerge, and they threaten to change the Internet’s fundamental architecture of openness. […] This is about preserving and maintaining something profoundly successful and ensuring that it’s not distorted or undermined. If we wait too long to preserve a free and open Internet, it will be too late.

To which I respond: Whaaaa? What we’ve actually seen are some scattered and mostly misguided attempts by certain ISPs to choke off certain kinds of traffic, thus far largely nipped in the bud by a combination of consumer backlash and FCC brandishing of existing powers. To the extent that packet “discrimination” involves digging into the content of user communications, it may well run up against existing privacy regulations that require explicit, affirmative user consent for such monitoring. In any event, I’m prepared to believe the situation could worsen. But pace Genachowski, it’s really pretty mysterious to me why you couldn’t start talking about the wisdom—and precise character—of some further regulatory response if and when it began to look like a free and open Internet were in serious danger.

If anything, it seems to me that the reverse is true: If you foreclose in advance the possibility of cross-subsidies between content and network providers, you probably never get to see the innovations you’ve prevented, while discriminatory routing can generally be detected, and if necessary addressed, if and when it occurs.  And the worst possible time to start throwing up barriers to a range of business models, it seems to me, is exactly when we’re finally seeing the roll-out of the next-generation wireless networks that might undermine the broadband duopoly that underpins the rationale for net neutrality in the first place. In a really competitive broadband market, after all, we can expect deviations from neutrality that benefit consumers to be adopted while those that don’t are punished by the market. I’d much rather see the FCC looking at ways to increase competition than adopt regulations that amount to resigning themselves to a broadband duopoly.

Instead of giving wireline incumbents a new regulatory stick to whack new entrants with, the FCC could focus on facilitating exploitation of “white spaces” in the broadcast spectrum or experimenting with spectral commons to enable user-owned mesh networks. The most perverse consequence I can imagine here is that you end up pushing spectrum owners to cordon off bandwidth for application-specific private networks—think data and cable TV flowing over the same wires—instead of allocating capacity to the public Internet, where they can’t prioritize their own content streams.  It just seems crazy to be taking this up now rather than waiting to see how these burgeoning markets shake out.


Public Information and Public Choice

One of the high points of last week’s Gov 2.0 Summit was transparency champion Carl Malamud’s speech on the history of public access to government information – ending with a clarion call for government documents, data, and deliberation to be made more freely available online. The argument is a clear slam-dunk on simple grounds of fairness and democratic accountability. If we’re going to be bound by the decisions made by regulatory agencies and courts, surely at a bare minimum we’re all entitled to know what those decisions are and how they were arrived at. But as many of the participants at the conference stressed, it’s not enough for the data to be available – it’s important that it be free, and in a machine-readable form. Here’s one example of why, involving the PACER system for court records:

The fees for bulk legal data are a significant barrier to free enterprise, but an insurmountable barrier for the public interest. Scholars, nonprofit groups, journalists, students, and just plain citizens wishing to analyze the functioning of our courts are shut out. Organizations such as the ACLU and EFF and scholars at law schools have long complained that research across all court filings in the federal judiciary is impossible, because an eight cent per page charge applied to tens of millions of pages makes it prohibitive to identify systematic discrimination, privacy violations, or other structural deficiencies in our courts.

If you’re thinking in terms of individual cases – even those involving hundreds or thousands of pages of documents – eight cents per page might not sound like a very serious barrier. If you’re trying to do a meta-analysis that looks for patterns and trends across the body of cases as a whole, not only is the formal fee going to be prohibitive in the aggregate, but even free access won’t be much help unless the documents are in a format that can be easily read and processed by computers, given the much higher cost of human CPU cycles. That goes double if you want to be able to look for relationships across multiple different types of documents and data sets.
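Just to make the scale concrete, here’s a rough, purely illustrative calculation; the page counts below are assumptions I’ve picked for the sake of example, not actual PACER statistics:

```python
# Rough illustration of why a per-page fee that is trivial for one case becomes
# prohibitive for bulk research. The page counts are illustrative assumptions,
# not actual PACER statistics.

FEE_PER_PAGE = 0.08                # the eight-cent-per-page charge cited above

single_case_pages = 2_000          # one document-heavy individual case (assumed)
bulk_corpus_pages = 50_000_000     # "tens of millions of pages" (assumed midpoint)

print(f"One large case:     ${FEE_PER_PAGE * single_case_pages:,.2f}")
print(f"Bulk meta-analysis: ${FEE_PER_PAGE * bulk_corpus_pages:,.2f}")
```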

All familiar enough to transparency boosters. Is there a reason proponents of limited government ought to be especially concerned with this, beyond a general fondness for openness? Here’s one reason. Public choice theorists often point to the problem of diffuse costs and concentrated benefits as a source of bad policy. In brief, a program that inefficiently transfers a million dollars from millions of taxpayers to a few beneficiaries will create a million-dollar incentive for the beneficiaries to lobby on its behalf, while no individual taxpayer has much motivation to expend effort on recovering his tiny share of the benefit of axing the program. And political actors have similarly strong incentives to create identifiable constituencies who benefit from such programs and kick back those benefits in the form of either donations or public support. What Malamud and others point out is that one thing those concentrated beneficiaries end up doing is expending resources on remaining fairly well informed about what government is doing – what regulations and expenditures are being contemplated – in order to be able to act for or against them in a timely fashion.
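Here’s a stylized illustration of that asymmetry; every number is hypothetical, chosen only to mirror the million-dollars-spread-over-millions-of-taxpayers example above:

```python
# Stylized illustration of diffuse costs vs. concentrated benefits.
# Every number here is hypothetical, chosen only to mirror the example in the text.

transfer = 1_000_000       # annual value of the program to its beneficiaries
beneficiaries = 100        # a small, organized group (assumed)
taxpayers = 3_000_000      # a large, dispersed group of payers (assumed)

stake_per_beneficiary = transfer / beneficiaries   # $10,000 each: worth lobbying over
stake_per_taxpayer = transfer / taxpayers          # about $0.33 each: not worth fighting

print(f"Per-beneficiary stake: ${stake_per_beneficiary:,.2f}")
print(f"Per-taxpayer stake:    ${stake_per_taxpayer:,.2f}")
```

Even with those made-up numbers, the organized group’s per-member stake is tens of thousands of times larger than any one taxpayer’s.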

Now, as the costs of organizing dispersed people get lower thanks to new technologies, we’re seeing increasing opportunities to form ad hoc coalitions supporting and opposing policy changes with more dispersed costs and benefits – which is good, and works to erode the asymmetry that generates a lot of bad policy. But incumbent constituencies have the advantage of already being organized and able to invest resources in identifying policy changes that implicate their interests. If ten complex regulations are under consideration, and one creates a large benefit to an incumbent constituent while imposing smaller costs on a much larger group of people, it’s a great advantage if the incumbent is aware of the range of options in advance, and can push for their favored option, while the dispersed losers only become cognizant of it when the papers report on the passage of a specific rule and slowly begin teasing out its implications.

Put somewhat more briefly: Technology that lowers organizing costs can radically upset a truly pernicious public choice dynamic, but only if the information necessary to catalyze the formation of a blocking coalition is out there in a form that allows it to be sifted and analyzed by crowdsourced methods first. Transparency matters less when organizing costs are high, because the fight is ultimately going to be decided by a punch-up between large, concentrated interest groups for whom the cost of hiring experts to learn about and analyze the implications of potential policy changes is relatively trivial. As transaction costs fall, and there’s potential for spontaneous, self-identifying coalitions to form, those information costs loom much larger. The timely availability – and aggregability – of information about the process of policy formation and its likely consequences then suddenly becomes a key determinant of the power of incumbent constituencies to control policy and extract rents.

Early Education: Lots of Noise, Little to Hear

This weekend, the Detroit News ran a letter to the editor taking issue with a piece I wrote about the Student Aid and Fiscal Responsibility Act (SAFRA). Strangely, the letter is all about pre-K education, even though the main part of SAFRA deals with higher education loans, the bill contains new spending all over the education map, and I made no specific mention of early-childhood education in my piece (though there is an early-ed component in the bill).

That the pre-K pushers even saw my op-ed as something to write about illustrates how very aggressive they are. Unfortunately, the letter also demonstrates how dubious the message they are so loudly and energetically proclaiming really is. Here’s a telling bit:

Economists, business leaders and scientists all know from cold, hard data that high-quality early education provides a significant return on investment in terms of education, social and health outcomes.

Whether pre-K education is worth even a dime all depends on how you define “high quality.” As Adam Schaeffer lays out in his new early-education policy analysis — and Andrew Coulson reiterates in an exchange with economist James Heckman — the “cold, hard data” say only that a few programs seem to work, and most don’t. Pronouncements about the huge returns on pre-K investment are almost always based on very small, hyper-intensive programs that would be all but impossible to replicate on a large scale. And the programs that do function on a large scale? As Adam lays out, they provide little to no return on investment.

The early-education crowd is very good at getting out its message. Too bad the message itself is so darn suspect.

Obama to Seek Cap on Federal Pay Raises

USA Today reports that President Obama is seeking a cap on federal pay raises:

President Obama urged Congress Monday to limit cost-of-living pay raises to 2% for 1.3 million federal employees in 2010, extending an income squeeze that has hit private workers and threatens Social Security recipients and even 401(k) investors.

…The president’s action comes when consumer prices have fallen 2.1% in the 12 months ending in July, because of a massive drop in energy prices. The recession has taken an even tougher toll on private-sector wages, which rose only 1.5% for the year ended in June — the lowest increase since the government started keeping track in 1980. Private-sector workers also have been subject to widespread layoffs and furloughs.

Last week, economist Chris Edwards discussed data from the Bureau of Economic Analysis that revealed the large gap between the average pay of federal employees and private workers. His call to freeze federal pay “for a year or two” received attention and criticism (FedSmith, GovExec, Federal Times, Matt Yglesias, Conor Clarke), to which he has responded.

As explained on CNN earlier this year, the pay gap between federal and private workers has been widening for some time now: