Archives: February, 2012

War Against the Core

With the release of a new Brookings Institution report today, and one from a consortium of groups last week, resistance to the national-standards offensive seems to be mounting. And even though almost every state in the union has adopted the Common Core, and few are likely to formally undo that, the war against the Core can still be won.

Today’s new front comes in the form of the Brookings Institution’s 2012 Brown Center Report on American Education, which includes three sections attacking rampant misuse of standards and tests. The first focuses on the Common Core, looking at the discernible impacts of state-level standards on achievement, and finding that (a) varying state standards have no meaningful correlation with achievement on the National Assessment of Educational Progress, and (b) there is much greater variation within states than between them, meaning national standards will do little to change big achievement gaps.

The report’s other two sections deal, first, with differences between the Main and Long-Term Trend NAEP – which brings up a central problem of using tests to judge quality without knowing what’s on them – and, second, with the misuse of international exams to tout favorite policy prescriptions. Basically, pundits and analysts love to pick out countries in isolation and finger one or two characteristics of their education systems as key to their success. Some also love to invoke this stinker that I and others have railed about for years:

In the U.S., advocates of a national curriculum have for years pointed to nations at the top of TIMSS and PISA rankings and argued that because those countries have national curriculums, a national curriculum must be good. The argument is without merit. What the advocates neglect to observe is that countries at the bottom of the international rankings also have a national curriculum.

The report is well worth checking out. The only quibble I have is that it fails to mention what I covered two years ago, when the national standards stealth attack was fully underway: a review of the national standards research literature shows no meaningful evidence that national standards lead to better outcomes. It’s great to have more support for this, but we’ve known for a while that the empirical foundation for national standards is balsa-wood strong.

The second report comes from a coalition of the Pioneer Institute, Pacific Research Institute, Federalist Society, and American Principles Project. The Road to a National Curriculum focuses on all the legal violations perpetrated by the federal government to “incentivize” state adoption of the Common Core and connected tests. Much is ground we at Cato have periodically covered, but this report goes into much greater depth on specific statutory violations. It also does nice work debunking standards supporters’ plea that they don’t want to touch curriculum, only standards, as if the whole point of setting standards weren’t to shape curricula. The report goes beyond pointing out just this logical silliness by identifying numerous instances of Education Department officials, or developers of federally funded tests, stating explicitly that their goal is to shape curricula.

This report is another welcome counter-attack, though it, like the Brookings report, misses something important: in this case, that all federal education action – outside of governing District of Columbia and military schools, and enforcing civil rights – is unconstitutional. Stick to that, and none of these other threats materialize.

Unfortunately, it is unlikely that many states that have adopted the Common Core – and all but four have – will officially back out. An effort was made in Alabama to do so, and one is underway in South Carolina, but Alabama’s failed and it’s not clear that there’s huge Palmetto State desire to withdraw.  Many state politicians don’t want to miss out on waivers from No Child Left Behind, which the Obama administration has essentially made contingent on adopting the Common Core, and others would rather not revisit the often contentious standards-adoption process.

That doesn’t mean that any state is truly locked into the Common Core. Formally they are, but, like so much that government does, states and districts could just ignore the Common Core, keeping it as the official standard but doing something else in practice. The only thing that could really stop them is if Washington were to rewrite federal law to make access to major, annual education funding – not Race to the Top or even waivers, but money from a reauthorized No Child Left Behind – contingent on adopting the Common Core, and on performance on one of the two federally funded tests to go with the standards. Then the battle truly would be lost, but we are not there yet – indeed, reauthorization doesn’t seem likely until at least next year – so there is plenty of time for the national standards resistance to grow, and to dismantle the powerful, but ultimately hollow, national standards juggernaut.

Dumb Government Intervention in the Housing Market of the Day

With the continuing bailout of Fannie Mae and Freddie Mac, along with the impending bailout of the Federal Housing Administration, it is easy to think that the federal government has a near monopoly on misguided and harmful housing policies.  Sadly, local governments manage, on a regular basis, to give the federal government some real competition in terms of just plain dumb.

The latest entry in this category comes from Winona, Minnesota.  The great folks at the Institute for Justice summarize Winona’s recent actions pretty well:

“In Winona, only 30 percent of homes on a given block may receive a government-issued license entitling the owner to rent them out.  As soon as 30 percent of the properties on a block obtain rental licenses, no other property on that block may receive a rental license.”

There are just so many reasons why this policy is harmful.  First, if you happen to care about the poor and needy, this policy directly reduces the stock of available rental housing.  It benefits existing landlords at the expense of renters and potential landlords, a policy that is likely to be very regressive.

Second, the policy reduces the value of homes that don’t get the license.  By reducing what you can do with a property, you reduce its value.  You also end up leaving foreclosures vacant that could otherwise be rented out.  That may also depress the value of nearby homes.  Not to mention you may increase foreclosures, because absentee owners, such as IJ’s client Ethan Dean, who owns a home in Winona and is currently serving in Afghanistan, may not be able to cover the mortgage without renting out the property.

The good news is that the Institute for Justice is litigating to have this misguided policy overturned.  Best of luck to them.

Think of All the Jobs We’re Creating in Regulatory Compliance!

Not long ago I took note in this space of how some people conceive of government regulation as a way to create jobs among lawyers, fillers-out of paperwork forms, installers of state-mandated equipment, and so forth. In case you thought I was exaggerating, here’s a new Business Week article arguing in all earnestness that “Regulations Create Jobs, Too.” Given the wounded state of the U.S. economy since the 2008 crash, it laments, “government rules have become politically toxic.” But never fear: “The Obama Administration, girding for election-year attacks on its record, is trying to highlight the upside of government rules.”

To be sure, the article itself is not as bad as its headline, and does make some fair points. It’s true that many politicians sling around the epithet “job-destroying” as if the chief objection to regulatory monstrosities like ObamaCare and Dodd-Frank were their effect in wiping out many existing jobs. In practice over the longer run many such laws shuffle around employment (often from productive uses more highly valued by consumers to those more highly valued by Washington) rather than reduce it permanently. If coal burning is suppressed, perhaps there will be more jobs for those who drill for natural gas, cut down firewood, or manufacture candles for use in blackouts. Overall, the soundest critique of bad regulations is often the most fundamental: that on net they destroy wealth, liberty, property rights, and freedom of consumer choice, in ways that last long beyond the initial pain of disrupted employment.

Of course proponents are at liberty to argue that a given regulation generates benefits that make it worthwhile, and the rest of us will evaluate those arguments on their own merits. But in trying to “highlight the upside of government rules,” it does rather sound as if the Obama administration is hoping to claim some sort of credit for the supposed benefit of creating jobs in the compliance sector. And that claim deserves to be filed under Frederic Bastiat’s broken window fallacy, as explicated in this space and indeed on occasion in the pages of Business Week itself.

Is There a Diogenes In the House?

Today POLITICO Arena asks:

Will congressional dysfunction boost President Obama’s campaign against a do-nothing Congress, or will it push voters to wipe the slate clean and push for GOP leadership?

My response:

Congressional dysfunction will boost Obama’s reelection chances only if Republicans fail to frame the issue accurately. There’s no do-nothing Congress, only a do-nothing Senate. After the 2010 mid-term elections the new Republican House passed numerous measures aimed at addressing our deficits and debt – none terribly far-reaching, unfortunately – but they’ve all died in Harry Reid’s do-nothing Democratic Senate, which has shown itself incapable of cutting anything.

That means, understandably, that we’re in a holding pattern until next year. As we’re seeing with the extension of the payroll tax holiday, House Republicans are no longer willing to pass responsible rollbacks of government – however parsimonious – only to be demagogued by the president and his party. That game is over. The only questions, as we roll toward November, are whether Republicans can frame the issue for what it is – don’t hold your breath – and whether enough voters will see through the other side’s demagoguery to elect a president and a Congress that will begin to seriously address the out-of-control government we have today. That was done in many of the states in our last elections. It can be done at the national level too, but only with the right messages.

Should a Congress that Doesn’t Understand Math Regulate Cybersecurity?

There’s a delicious irony in some of the testimony on cybersecurity that the Senate Homeland Security and Governmental Affairs Committee will hear today (starting at 2:30 Eastern — it’s unclear from the hearing’s page whether it will be live-streamed). Former National Security Agency general counsel Stewart Baker flubs a basic mathematical concept.

If Congress credits his testimony, is it really equipped to regulate the Internet in the name of “cybersecurity”?

Baker’s written testimony (not yet posted) says, stirringly, “Our vulnerabilities, and their consequences, are growing at an exponential rate.” He’s stirring cake batter, though. Here’s why.

Exponential growth occurs when the growth rate of the value of a mathematical function is proportional to the function’s current value. It’s nicely illustrated with rabbits. If in week one you have two rabbits, and in week two you have four, you can expect eight rabbits in week three and sixteen in week four. That’s exponential growth. The number of rabbits each week dictates the number of rabbits the following week. By the end of the year, the earth will be covered in rabbits. (The Internet provides us an exponents calculator, you see. Try calculating 2^52.)
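To put numbers on the distinction (a back-of-the-envelope illustration of my own, not anything from Baker’s testimony), here is a short Python sketch contrasting a rabbit population that doubles every week with one that merely adds a couple of rabbits every week:

```python
# A minimal sketch (not from Baker's testimony) contrasting exponential
# doubling with linear growth over the weeks of a year.

def rabbits_exponential(week: int, start: int = 2) -> int:
    """Population that doubles every week: start * 2**(week - 1)."""
    return start * 2 ** (week - 1)

def rabbits_linear(week: int, start: int = 2, per_week: int = 2) -> int:
    """Population that adds a fixed number of rabbits every week."""
    return start + per_week * (week - 1)

for week in (1, 2, 3, 4, 52):
    print(week, rabbits_linear(week), rabbits_exponential(week))

# Week 52: linear growth yields 104 rabbits; weekly doubling yields
# 2 * 2**51 = 2**52, roughly 4.5 quadrillion rabbits.
```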

The vulnerabilities of computers, networks, and data may be growing. But such vulnerabilities are not a function of the number of transistors that can be placed on an integrated circuit. Baker is riffing on Moore’s Law, which describes long-term exponential growth in computing power.

Instead, vulnerabilities will generally be a function of the number of implementations of information technology. A new protocol may open one or more vulnerabilities. A new piece of software may have one or more vulnerabilities. A new chip design may have one or more vulnerabilities. Interactions between various protocols and pieces of hardware and software may create vulnerabilities. And so on. At worst, in some fields of information technology, there might be something like cubic growth in vulnerabilities, but it’s doubtful that such a trend could last.

Why? Because vulnerabilities are also regularly closing. Protocols get ironed out. Software bugs get patched. Bad chip designs get fixed.

There’s another dimension along which vulnerabilities are also probably growing. This would be a function of the “quantity” of information technology out there. If there are 10,000 instances of a given piece of software in use out there with a vulnerability, that’s 10,000 vulnerabilities. If there are 100,000 instances of it, that’s 10 times more vulnerabilities—but that’s still linear growth, not exponential growth. The number of vulnerabilities grows in direct proportion to the number of instances of the technology.

Ignore the downward pressure on vulnerabilities, though, and put growth in the number of vulnerabilities together with growth in the propagation of vulnerabilities. Don’t you have exponential growth? No. You still have linear growth. The growth in vulnerabilities from new implementations of information technology and the growth from new instances of that technology multiply together. Across technologies, they sum. They don’t act as exponents to one another.
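To make that concrete, here is a toy model of the aggregation described above; the technology names and counts are invented purely for illustration, and this is my sketch, not a formula from Baker or anyone else:

```python
# A toy model (invented numbers, for illustration only) of the aggregation
# described above: within one technology, vulnerabilities per implementation
# multiply with the number of deployed instances; across technologies the
# totals simply add. Nothing is raised to the power of anything else, so a
# change in any one factor moves the total proportionally, not exponentially.

technologies = {
    # name: (vulnerabilities per implementation, deployed instances)
    "protocol_x": (3, 10_000),
    "app_y": (5, 100_000),
    "chip_z": (1, 50_000),
}

total = sum(vulns * instances for vulns, instances in technologies.values())
print(f"total exposed vulnerabilities: {total:,}")

# Doubling app_y's deployment doubles only app_y's own term in the sum.
technologies["app_y"] = (5, 200_000)
total_after = sum(v * n for v, n in technologies.values())
print(f"after doubling app_y's deployment: {total_after:,}")
```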

Baker uses “vulnerability” and “threat” interchangeably, but careful thinkers about risk wouldn’t do this. Vulnerability is the existence of a weakness. Threat is someone or something animated to exploit it (a “hazard” if that thing is inanimate). Vulnerabilities don’t really matter, in fact, if there isn’t anyone to exploit them. Do you worry about the number of hairs on your body being a source of pain? No, because nobody is going to come along and pluck them all. You need to have a threat vector, or vulnerability is just idle worry.

Now, threats can multiply quickly online. When exploits to some vulnerabilities are devised, their creators can propagate them quickly to others, such as “script kiddies” who will run such exploits everywhere they can. Hence the significance of the “zero-day threat” and the importance of patching software promptly.

As to consequence, Baker cites examples of recent hacks on HBGary, RSA, Verisign, and DigiNotar, as well as weaknesses in industrial control systems. This says nothing about growth rates, much less how the number of hacks in the last year forms the basis for more in the next. If some hacks allow other hacks to be implemented, that, again, would be a multiplier, not an exponent. (Generally, these most worrisome hacks can’t be executed by script kiddies, so they are not soaring in numerosity. You know what happens to consequential hacks that do soar in numerosity? They’re foreclosed by patches.)

Vulnerability and threat analyses are inputs into determinations about the likelihood of bad things happening. The next step is to multiply that likelihood by the consequence. The product is a sense of how important a given risk is. That’s risk assessment.
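The arithmetic of that last step fits in a few lines. The sketch below is my own illustration of the likelihood-times-consequence calculation; the scenarios, probabilities, and dollar figures are made up, not drawn from any actual risk model:

```python
# A back-of-the-envelope sketch of the risk-assessment step described above:
# risk = likelihood of the bad event times its consequence. The scenarios,
# probabilities, and dollar figures below are invented for illustration only.

def risk(likelihood: float, consequence: float) -> float:
    """Expected loss: probability of the bad event times its cost."""
    return likelihood * consequence

scenarios = {
    "defaced public website": risk(likelihood=0.30, consequence=50_000),
    "stolen certificate-authority keys": risk(likelihood=0.01, consequence=50_000_000),
}

for name, expected_loss in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected loss of about ${expected_loss:,.0f}")

# A low-likelihood, high-consequence event can still dominate the ranking,
# which is why likelihood and consequence have to be weighed together.
```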

But Baker isn’t terribly interested in acute risk management. During his years as Assistant Secretary for Policy at the Department of Homeland Security, the agency didn’t do the risk management work that would validate or invalidate the strip-search machine/intrusive pat-down policy (and it still hasn’t, despite a court order). The bill he’s testifying in support of wouldn’t manage cybersecurity risks terribly well, either, for reasons I’ll articulate in a forthcoming post.

Do your representatives in Congress get the math involved here? Do they know the difference between exponential growth and linear growth? Do they “get” risk management? Chances are they don’t. They may even parrot the “statistic” that Baker is putting forth. How well equipped do you suppose a body like that is for telling you how to do your cybersecurity?

Hating the Rich, and Other Curiosities

Readers of Cato-at-Liberty should also check out our latest blog, Free Thoughts on Libertarianism.org. Lots of interesting stuff there.  Like Aaron Powell on “Why We Get Mad at (some kinds of) Rich People.” And Jonathan Blanks on “Black History and Liberty.” And Jason Kuznicki on NPR and Ayn Rand. And Aaron’s profound disappointment with Sam Harris’s latest book.

Not to mention, of course, an ever-increasing amount of other great material on Libertarianism.org, including

  • George Smith’s essays on libertarian ideas
  • “Exploring Liberty,” a series of original videos introducing libertarianism
  • the complete text of the classic journal Literature of Liberty
  • vintage, never-before-seen videos from people like Hayek, Friedman, Rothbard, and most recently David Kelley from 1991 on the faultlines in the Objectivist movement
  • essays on major libertarian figures from Lord Acton to Mary Wollstonecraft
  • and more!

Chávez’s Electoral Fraud Cushion

The onslaught against Henrique Capriles Radonski by Venezuelan state-run media has begun after his decisive victory in Sunday’s presidential primary. Capriles is now the nominee of the opposition coalition, and he will face Hugo Chávez in October’s presidential election. As the Wall Street Journal reports, the vicious attacks against Capriles include, among other things, insinuations that he is a homosexual and a Zionist agent.

This election will not be a fair one. Not only does Chávez control most of the Venezuelan media, but his government is also dramatically increasing spending on popular social programs. About 8.5 million Venezuelans already receive some kind of permanent income or assistance from the government (4 million of them are public employees). The Chávez regime threatens and intimidates those who receive government handouts and dare to support the opposition. Moreover, since voting is electronic in Venezuela, many people fear—perhaps with good reason—that their votes aren’t secret. The government tacitly encourages these perceptions.

But that’s not the end of the story. Chávez also controls Venezuela’s National Electoral Council. Due to the inability of the opposition to monitor every voting station in the country, the stated results of the vote may not be accurate. The Electoral Council usually takes longer than is necessary to tabulate voting results from electronic systems, which has raised concerns of fraudulent activity.

A main concern is the electoral registry, as documented by Gustavo Coronel in a Cato study back in 2006. Coronel wrote that an independent analysis of the electoral registry found many irregularities:

such as the existence of 39,000 voters over one hundred years old. This is a number equal to that of the same age group in the United States, where the population is 10 times greater. Of these 39,000 people, 17,000 were born in the 19th century, and one is 175 years old and still working! Nineteen thousand voters were born the same day and year in the state of Zulia. There are thousands of people sharing the same address.

So on top of the support of his followers (some enthusiastic, others intimidated), which fluctuates around 45 percent of the population, Chávez can also rely on a margin of error due to electoral fraud if he doesn’t get enough votes for his reelection. I’ve talked to some Venezuelans who say this margin can be as high as eight percentage points. That is, if the election is decided by less than that (very likely the case), Chávez can doctor the results in his favor.

The opposition promises to have people in every single voting station in the country watching the vote. The National Electoral Council will probably bar international observers from monitoring the election. This sets up the potential for conflicting results from the opposition and the National Electoral Council. What would happen next is anyone’s guess.