Tag: alarmism

Reflections on Rapid Response to Unjustified Climate Alarm

The Cato Institute’s Center for the Study of Science today kicks off its rapid response center that will identify and correct inappropriate and generally bizarre claims on behalf of climate alarm. I wish them luck in this worthy enterprise, but more will surely be needed to deal with this issue.

To be sure, there is an important role for such a center. It is not to convince the ‘believers.’ Nor do I think that there is any longer a significant body of sincere and intelligent individuals who are simply trying to assess the evidence. As far as I can tell, the issue has largely polarized that relatively small portion of the population that has chosen to care about it. The remainder quite reasonably have chosen to remain outside the polarization. Thus the purpose of a rapid response center will be to reassure those who realize that this is a fishy issue that there remain scientists who are still concerned with the integrity of science. There is also a crucial role in informing those who wish to avoid the conflict as to what is at stake. While these are important functions, there are other issues that I feel a think tank ought to consider. Moreover, there is a danger that rapid response to trivial claims lends unwarranted seriousness to those claims.

Climate alarm belongs to a class of issues characterized by a claim for which there is no evidence but that nonetheless appeals strongly to one or more interests or prejudices. Once the issue is adopted, evidence becomes irrelevant. Instead, the believer sees what he believes. Anything can serve as a supporting omen. Three very different previous examples come to mind (though many more could be cited): Malthus’ theory of overpopulation, social Darwinism, and the Dreyfus Affair. Although each of these issues engendered opposition, only the Dreyfus Affair led to widespread societal polarization. More commonly, only the ‘believers’ are sufficiently driven to form a movement. We will briefly review these examples (though each has been the subject of book-length analyses), but the issue of climate alarm is somewhat special in that it appeals to a sizeable number of interests, and has strong claims on the scientific community. It also has the potential to cause exceptional harm to an unprecedented number of people. This has led to persistent opposition amidst widespread lack of interest. However, all these issues are characterized by profound immorality pretending to virtue.

Curricula with an Agenda? It Ain’t Just Big Coal

Today the Washington Post has a big story on efforts by the coal industry to get public schools to teach positive things about — you guessed it — coal. The impetus for the article is no doubt a recent kerfuffle over education mega-publisher Scholastic sending schools free copies of the industry-funded lesson plan “The United States of Energy.” Many parents and environmentalists were upset over businesses putting stealthy moves on kids, and Scholastic eventually promised to cease publication of the plan.

Loaded curricula designed to coerce specific sympathies from children, however, hardly come just from industry, as the Post story notes. Indeed, as I write in the new Cato book Climate Coup: Global Warming’s Invasion of Our Government and Our Lives, much of the curricular material put out on climate change, at least, is decidedly alarmist in nature, and is funded by you, the taxpayer. In other words, lots of people are trying to use the schools to push their biases on your kids, which is an especially dangerous thing considering how unsettled, uncertain, and multi-sided so many issues are.

In light of the huge question marks that exist in almost all subjects that schools address, the best education system is the one that is most decentralized, in which ideas can compete rather than having one (very likely flawed) conclusion imposed as orthodoxy. And it would be a system in which no level of government — whether district, state, or federal — would decide what view is correct, or what should be taught based on the existence of some supposed consensus, as if “consensus” were synonymous with “absolute truth.” What is truth should not be decided by who has the best lobbyists or most political weight, nor should children be forced to learn what government simply deems to be best.

Of course, there are some people who will decide that they are so correct about something that it would be abusive not to have government force children to learn it. If their conclusion is so compelling and obvious, however, no coercion should be necessary to get people to teach it to their children — it should be overwhelmingly clear. More importantly, if there is controversy, efforts to impose a singular view are likely to fail not just with the children of unbelievers, but also with many of the children whose parents share the view. As significant anecdotal evidence over the teaching of human origins has strongly suggested — and new empirical work has substantiated — when public schools are confronted with controversial issues, they tend to avoid them altogether rather than teach any side. In other words, efforts at compulsion don’t just fail, they hurt everyone.

Educational freedom, then, is the only solution to the curricular problem. If you want full power to avoid the imposition of unwanted materials on your children, you must be able to choose schools. And if you want to ensure that your kids get the instruction you think every child should have, everyone else must have that ability, too.

Some Thinking on “Cyber”

Last week, I had the opportunity to testify before the House Science Committee’s Subcommittee on Technology and Innovation on the topic of “cybersecurity.” I have been reluctant to opine on it because of its complexity, but I did issue a short piece a few months ago arguing against government-run cybersecurity. That piece was cited prominently in the White House’s “Cyberspace Policy Review” and – blamo! – I’m a cybersecurity expert.

Not really – but I have been forming some opinions at a high level of generality that are worth making available. They can be found in my testimony, but I’ll summarize them briefly here.

First, “cybersecurity” is a term so broad as to be meaningless. Yes, we are constructing a new “space” analogous to physical space using computers, networks, sensors, and data, but we can no more secure “cyberspace” in its entirety than we can secure planet Earth and the galaxy. Instead, we secure the discrete things that are important to us – houses, cars, buildings, power lines, roads, private information, money, and so on. And we secure these things in thousands of different ways. We should secure “cyberspace” the same way – thousands of different ways.

By “we,” of course, I don’t mean the collective. I mean that each owner or controller of a prized thing should look out for its security. It’s the responsibility of designers, builders, and owners of houses, for example, to ensure that they properly secure the goods kept inside. It’s the responsibility of individuals to secure the information they wish to keep private and the money they wish to keep. It is the responsibility of network operators to secure their networks, data holders to secure their data, and so on.

Second, “cyber” threats are being over-hyped by a variety of players in the public policy area. Invoking “cyberterrorism” or “cyberwar” is near-boilerplate in white papers addressing government cybersecurity policy, but there is very limited strategic logic to “cyberwarfare” (aside from attacking networks during actual war-time), and “cyberterrorism” is a near-impossibility. You’re not going to panic people – and that’s rather integral to terrorism – by knocking out the ATM network or some part of the power grid for a period of time.

(We weren’t short of careless discussions about defending against “cyber attack,” but L. Gordon Crovitz provided yet another example in yesterday’s Wall Street Journal. As Ben Friedman pointed out, Evgeny Morozov has the better of it in the most recent Boston Review.)

This is not to deny the importance of securing digital infrastructure; it’s to say that it’s serious, not scary. Precipitous government cybersecurity policies – especially to address threats that don’t even have a strategic logic – would waste our wealth, confound innovation, and threaten civil liberties and privacy.

In the cacophony over cybersecurity, an important policy seems to be getting lost: keeping true critical infrastructure offline. I noted Senator Jay Rockefeller’s (D-WV) awesomely silly comments about cybersecurity a few months ago. They were animated by the premise that all the good things in our society should be connected to the Internet or managed via the Internet. This is not true. Removing true critical infrastructure from the Internet takes care of the lion’s share of the cybersecurity problem.

Since 9/11, the country has suffered significant “critical-infrastructure inflation” as companies gravitate to the special treatments and emoluments government gives owners of “critical” stuff. If “criticality” is to be a dividing line for how assets are treated, it should be tightly construed: If the loss of an asset would immediately and proximately threaten life or health, that makes it critical. If danger would materialize over time, that’s not critical infrastructure – the owners need to get good at promptly repairing their stuff. And proximity is an important limitation, too: The loss of electric power could kill people in hospitals, for example, but ensuring backup power at hospitals can intervene and relieve us of treating the entire power grid as “critical infrastructure,” with all the expense and governmental bloat that would entail.

So how do we improve the state of cybersecurity? It’s widely believed that we are behind on it. Rather than figuring out how to do cybersecurity – which is impossible – I urged the committee to consider what policies or legal mechanisms might get these problems figured out.

I talked about a hierarchy of sorts. First, contract and contract liability. The government is a substantial purchaser of technology products and services – and highly knowledgeable thanks to entities like the National Institute of Standards and Technology. Yes, I would like it to be a smaller purchaser of just about everything, but while it is a large market actor, it can drive standards and practices (like secure settings by default) into the marketplace that redound to the benefit of the cybersecurity ecology. The government could also form contracts that rely on contract liability – when products or services fail to serve the purposes for which they’re intended, including security, sellers would lose money. That would focus them as well.

A prominent report by a working group at the Center for Strategic and International Studies – co-chaired by one of my fellow panelists before the Science Committee last week, Scott Charney of Microsoft – argued strenuously for cybersecurity regulation.

But that raises the question of what such regulation would say. Regulation is poorly suited to the process of discovering how to solve new problems amid changing technology and business practices.

There is some market failure in the cybersecurity area. Insecure technology can harm networks and users of networks, and these costs don’t accrue to the people selling or buying technology products. To get them to internalize these costs, I suggested tort liability rather than regulation. While courts discover the legal doctrines that unpack the myriad complex problems with litigating about technology products and services, they will force technology sellers and buyers to figure out how to prevent cyber-harms.

Government has a role in preventing people from harming each other, of course, and the common law could develop to meet “cyber” harms if it is left to its own devices. Tort litigation has been abused, and the established corporate sector prefers regulation because it is a stable environment for them, it helps them exclude competition, and they can use it to avoid liability for causing harm, making it easier to lag on security. Litigation isn’t preferable, and we don’t want lots of it – we just want the incentive structure tort liability creates.

As the distended policy issue it is, “cybersecurity” is ripe for shenanigans. Aggressive government agencies are looking to get regulatory authority over the Internet, computers, and software. Some of them wouldn’t mind getting to watch our Internet traffic, of course. Meanwhile, the corporate sector would like to use government to avoid the hot press of market competition, while shielding itself from liability for harms it may cause.

The government must secure its own assets and resources – that’s a given. Beyond that, not much good can come from government cybersecurity policy, except the occasional good, long blog post.

Morozov vs. Cyber-Alarmism

I’m no information security expert, but you don’t have to be to realize that an outbreak of cyber-alarmism afflicts American pundits and reporters.

As Jim Harper and Tim Lee have repeatedly argued (with a little help from me), while the internet created new opportunities for crime, spying, vandalism and military attack, the evidence that the web opens a huge American national security vulnerability comes not from events but from improbable what-ifs. That idea is, in other words, still a theory. Few pundits bother to point out that hackers don’t kill, that cyberspies don’t seem to have stolen many (or any?) important American secrets, and that our most critical infrastructure is not run on the public internet and thus is relatively invulnerable to cyberwhatever. They never note that to the extent that future wars have an online component, this redounds to the U.S. advantage, given our technological prowess. Even the Wall Street Journal and New York Times recently published breathless stories exaggerating our vulnerability to online attacks and espionage.

So it’s good to see that the July/August Boston Review has a terrific article by Evgeny Morozov taking on the alarmists. He provides not only a sober net assessment of the various worries categorized by the vague modifier “cyber” but even offers a theory about why hype wins.

Why is there so much concern about “cyber-terrorism”? Answering a question with a question: who frames the debate? Much of the data are gathered by ultra-secretive government agencies—which need to justify their own existence—and cyber-security companies—which derive commercial benefits from popular anxiety. Journalists do not help. Gloomy scenarios and speculations about cyber-Armageddon draw attention, even if they are relatively short on facts.

I agree.