Tag: cyber attack

This “Cyberwar” Is a Cybersnooze

The AP and other sources have been reporting on a “cyberattack” affecting South Korea and U.S. government Web sites, including the White House, Secret Service and Treasury Department.

Allegedly mounted by North Korea, this attack puts various “cyber” threats in perspective. Most Americans will probably not know about it, and the ones who do will learn of it by reading about it. Only a tiny percentage of people will notice the absence of the Web sites attacked. (An update to the story linked above notes that several agencies and entities “blunted” the attacks, as well-run Web sites will do.)
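As an aside on what "blunting" such an attack looks like in practice: this was a flood of repeated requests, and a well-run site copes with floods through measures as mundane as per-client rate limiting. Below is a minimal sketch of that idea (a hypothetical illustration, not how any of these agencies actually responded):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds from each client."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        # client_ip -> timestamps of that client's recent requests
        self.hits = defaultdict(deque)

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Forget requests that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # client is flooding; drop the request
        q.append(now)
        return True
```

A front-end proxy applying a check like this turns a flood from a fixed set of machines into a trickle, which is roughly what "blunting" a denial-of-service attempt amounts to.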

This is the face of “cyberwar,” which has little strategic value and little capacity to do real damage. This episode also underscores the fact that “cyberterrorism” cannot exist – because this kind of attack isn’t terrifying.

As I said in my recent testimony before the House Science Committee, it is important to secure web sites, data, and networks against all threats, but this can be done and is being done methodically and successfully – if imperfectly – by the distributed owners and controllers of all our nation’s “cyber” assets. Hyping threats like “cyberwar” and “cyberterror” is not helpful.

Some Thinking on “Cyber”

Last week, I had the opportunity to testify before the House Science Committee’s Subcommittee on Technology and Innovation on the topic of “cybersecurity.” I have been reluctant to opine on it because of its complexity, but I did issue a short piece a few months ago arguing against government-run cybersecurity. That piece was cited prominently in the White House’s “Cyberspace Policy Review” and – blamo! – I’m a cybersecurity expert.

Not really – but I have been forming some opinions at a high level of generality that are worth making available. They can be found in my testimony, but I’ll summarize them briefly here.

First, “cybersecurity” is a term so broad as to be meaningless. Yes, we are constructing a new “space” analogous to physical space using computers, networks, sensors, and data, but we can no more secure “cyberspace” in its entirety than we can secure planet Earth and the galaxy. Instead, we secure the discrete things that are important to us – houses, cars, buildings, power lines, roads, private information, money, and so on. And we secure these things in thousands of different ways. We should secure “cyberspace” the same way – thousands of different ways.

By “we,” of course, I don’t mean the collective. I mean that each owner or controller of a prized thing should look out for its security. It’s the responsibility of designers, builders, and owners of houses, for example, to ensure that they properly secure the goods kept inside. It’s the responsibility of individuals to secure the information they wish to keep private and the money they wish to keep. It is the responsibility of network operators to secure their networks, data holders to secure their data, and so on.

Second, “cyber” threats are being over-hyped by a variety of players in the public policy area. Invoking “cyberterrorism” or “cyberwar” is near-boilerplate in white papers addressing government cybersecurity policy, but there is very limited strategic logic to “cyberwarfare” (aside from attacking networks during actual war-time), and “cyberterrorism” is a near-impossibility. You’re not going to panic people – and that’s rather integral to terrorism – by knocking out the ATM network or some part of the power grid for a period of time.

(We weren’t short of careless discussions about defending against “cyber attack,” but L. Gordon Crovitz provided yet another example in yesterday’s Wall Street Journal. As Ben Friedman pointed out, Evgeny Morozov has the better of it in the most recent Boston Review.)

This is not to deny the importance of securing digital infrastructure; it’s to say that it’s serious, not scary. Precipitous government cybersecurity policies – especially to address threats that don’t even have a strategic logic – would waste our wealth, confound innovation, and threaten civil liberties and privacy.

In the cacophony over cybersecurity, an important policy seems to be getting lost: keeping true critical infrastructure offline. I noted Senator Jay Rockefeller’s (D-WV) awesomely silly comments about cybersecurity a few months ago. They were animated by the premise that all the good things in our society should be connected to the Internet or managed via the Internet. This is not true. Removing true critical infrastructure from the Internet takes care of the lion’s share of the cybersecurity problem.

Since 9/11, the country has suffered significant “critical-infrastructure inflation” as companies gravitate to the special treatments and emoluments government gives owners of “critical” stuff. If “criticality” is to be a dividing line for how assets are treated, it should be tightly construed: If the loss of an asset would immediately and proximately threaten life or health, that makes it critical. If danger would materialize over time, that’s not critical infrastructure – the owners need to get good at promptly repairing their stuff. And proximity is an important limitation, too: The loss of electric power could kill people in hospitals, for example, but ensuring backup power at hospitals can intervene and relieve us of treating the entire power grid as “critical infrastructure,” with all the expense and governmental bloat that would entail.

So how do we improve the state of cybersecurity? It’s widely believed that we are behind on it. Rather than figuring out how to do cybersecurity – which is impossible – I urged the committee to consider what policies or legal mechanisms might get these problems figured out.

I talked about a hierarchy of sorts. First, contract and contract liability. The government is a substantial purchaser of technology products and services – and highly knowledgeable thanks to entities like the National Institute of Standards and Technology. Yes, I would like it to be a smaller purchaser of just about everything, but while it is a large market actor, it can drive standards and practices (like secure settings by default) into the marketplace that redound to the benefit of the cybersecurity ecology. The government could also form contracts that rely on contract liability – when products or services fail to serve the purposes for which they’re intended, including security, sellers would lose money. That would focus them as well.

A prominent report by a working group at the Center for Strategic and International Studies – co-chaired by one of my fellow panelists before the Science Committee last week, Scott Charney of Microsoft – argued strenuously for cybersecurity regulation.

But that raises the question of what regulation would say. Regulation is poorly suited to the process of discovering how to solve new problems amid changing technology and business practices.

There is some market failure in the cybersecurity area. Insecure technology can harm networks and users of networks, and these costs don’t accrue to the people selling or buying technology products. To get them to internalize these costs, I suggested tort liability rather than regulation. While courts discover the legal doctrines that unpack the myriad complex problems with litigating about technology products and services, they will force technology sellers and buyers to figure out how to prevent cyber-harms.

Government has a role in preventing people from harming each other, of course, and the common law could develop to meet “cyber” harms if it is left to its own devices. Tort litigation has been abused, and the established corporate sector prefers regulation because it is a stable environment for them, it helps them exclude competition, and they can use it to avoid liability for causing harm, making it easier to lag on security. Litigation isn’t preferable, and we don’t want lots of it – we just want the incentive structure tort liability creates.

As the distended policy issue it is, “cybersecurity” is ripe for shenanigans. Aggressive government agencies are looking to get regulatory authority over the Internet, computers, and software. Some of them wouldn’t mind getting to watch our Internet traffic, of course. Meanwhile, the corporate sector would like to use government to avoid the hot press of market competition, while shielding itself from liability for harms it may cause.

The government must secure its own assets and resources – that’s a given. Beyond that, not much good can come from government cybersecurity policy, except the occasional good, long blog post.

What We Have Here Is a Failure to Communicate

There are two parts to securing a country: making the country secure and making the country feel secure.

The head of U.S. Strategic Command, General Kevin Chilton, failed at the latter when he talked about security in a way that produced the following headline: “U.S. General Reserves Right to Use Force, Even Nuclear, in Response to Cyber Attack.”

As a theoretical matter, every element of military power should be on the table to respond to attacks. But the chance of responding to any “cyber attack” with military force is vanishingly small. To talk about responding with nuclear weapons simply helps spin our country into a security tizzy.

Politicians and military leaders should stop inflating the risk of cyber attack.

Awesome, Fearsome, Awesome - Or Maybe Silly

This video is making the rounds because Senator Jay Rockefeller (D-WV) muses in it that perhaps the Internet shouldn’t have been invented.

He immediately grants, “That’s a stupid thing to say” - perhaps for political reasons, or perhaps because he recognizes that the Internet makes us much better off despite every risk it carries and security flaw in it.

But he goes on to overstate cybersecurity risks breathlessly and self-seriously. Not quite to the point of stupid - maybe we can call it “silly.”

The Department of Defense, he says, is “attacked” three million times a day. Well, yeah, but these “attacks” are mostly repetitious use of the same attack, mounted by “script kiddies” - unsophisticated know-nothings who get copies of others’ attacks and run them just to make trouble. The defense against this is to continually foreclose attacks and genres of attack as they develop, the way the human body develops antibodies to germs and viruses.

It’s important work, and it’s not always easy, but securing against attacks is an ongoing, stable practice in network management and a field of ongoing study in computer science. The attacks may continue to come, but it doesn’t really matter when the immunities and failsafes are in place and continuously being updated.
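The "antibody" analogy can be made concrete with signature-based filtering: once an attack pattern is recognized and foreclosed, the millionth copy of it run by a script kiddie costs the defender essentially nothing. The sketch below is a hypothetical toy (the signatures and function names are invented for illustration), not any agency's actual defense:

```python
import re

# Hypothetical signatures for already-foreclosed attack patterns.
# Real filtering systems ship thousands of such rules.
KNOWN_ATTACK_SIGNATURES = [
    re.compile(r"(?i)union\s+select"),   # classic SQL injection probe
    re.compile(r"\.\./\.\./"),           # path traversal attempt
    re.compile(r"(?i)<script>"),         # reflected XSS attempt
]

def is_known_attack(request_path):
    """Return True if the request matches any known attack signature."""
    return any(sig.search(request_path) for sig in KNOWN_ATTACK_SIGNATURES)

def handle(request_path):
    # The "immune response": a recognized attack is dropped on sight,
    # no matter how many times it is replayed.
    if is_known_attack(request_path):
        return "403 Forbidden"
    return "200 OK"
```

Counting every such dropped request as an "attack" is how one arrives at numbers like three million a day.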

More important than this kind of threat inflation is the policy premise that the Internet should be treated as critical infrastructure because some important things happen on it.

Of cyber attack, Rockefeller says, “It’s an act … which can shut this country down. Shut down its electricity system, its banking system, shut down really anything we have to offer. It is an awesome problem.”

Umm, not really. Here’s Cato adjunct scholar Tim Lee, commenting on a report about the Estonian cyber attacks last year:

[S]ome mission-critical activities, including voting and banking, are carried out via the Internet in some places. But to the extent that that’s true, the lesson of the Estonian attacks isn’t that the Internet is “critical infrastructure” on par with electricity and water, but that it’s stupid to build “critical infrastructure” on top of the public Internet. There’s a reason that banks maintain dedicated infrastructure for financial transactions, that the power grid has a dedicated communications infrastructure, and that computer security experts are all but unanimous that Internet voting is a bad idea.

Tim has also noted that the Estonia attacks didn’t reach parliament, ministries, banks, and media - just their Web sites. Calm down, everyone.

But in the debate over raising the bridge or lowering the river, Rockefeller is choosing the policy that most enthuses and involves him: Get critical infrastructure onto the Internet and get the government into the cyber security business.

That’s a recipe for disaster. The right answer is to warn the operators of key infrastructure to keep critical functions off the Internet and let markets and tort law hold them responsible if they fail to keep those functions running.

I have written elsewhere about maintaining private responsibility for cyber security. My colleague Ben Friedman has written about who owns cyber security and more on the great cyber security freakout.