Tag: cybersecurity

Soviet-Style Cybersecurity Regulation

Reading over the cybersecurity legislative package recently introduced in the Senate is like reading a Soviet planning document. One of its fundamental flaws, if passed, would be its centralizing and deadening effect on society’s responses to the many and varied problems that are poorly captured by the word “cybersecurity.”

But I’m most struck by how, at every turn, this bill strains to release cybersecurity regulators—and their regulated entities—from the bonds of law. The Department of Homeland Security could commandeer private infrastructure into its regulatory regime simply by naming it “covered critical infrastructure.” DHS and a panel of courtesan institutes and councils would develop the regulatory regime outside of ordinary administrative processes. And—worst, perhaps—regulated entities would be insulated from ordinary legal liability if they were in compliance with government dictates. Regulatory compliance could start to usurp protection of the public as a corporate priority.

The bill retains privacy-threatening information-sharing language that I critiqued in no uncertain terms last week (Title VII), though the language has changed. (I have yet to analyze what effect those changes have.)

The news for Kremlin Beltway-watchers, of course, is that the Department of Homeland Security has won the upper hand in the turf battle. (That’s the upshot of Title III of the bill.) It’s been a clever gambit of Washington’s to make the debate about which agency should handle cybersecurity, rather than asking what the government’s role is and what it can actually contribute. Is it a small consolation that it’s a civilian security agency that gets to oversee Internet security for us, and not the military? None-of-the-above would have been the best choice of all.

Ah, but the government has access to secret information that nobody else does, doesn’t it? Don’t be so sure. Secrecy is a claim to authority that I reject. Many swoon to secrecy, assuming the government has 1) special information that is 2) actually helpful. I interpret secrecy as a failure to put facts into evidence. My assumption is the one consistent with accountable government and constitutional liberty. But we’re doing Soviet-style cybersecurity here, so let’s proceed.

Title I is the part of the bill that Sovietizes cybersecurity. It brings a welter of government agencies, boards, and institutes together with private-sector owners of government-deemed “critical infrastructure” to do sector-by-sector “cyber risk assessments” and to produce “cybersecurity performance requirements.” Companies would be penalized if they failed to certify to the government annually that they have “developed and effectively implemented security measures sufficient to satisfy the risk-based security performance requirements.” Twenty-first century paperwork violations. But in exchange, critical infrastructure owners would be insulated from liability (sec. 105(e))—a neat corporatist trade-off.

How poorly tuned these security-by-committee processes are. In just 90 days, the bill requires a “top-level assessment” of “cybersecurity threats, vulnerabilities, risks, and probability of a catastrophic incident across all critical infrastructure sectors” in order to guide the allocation of resources. That’s going to produce a risk assessment with all the quality of a student term paper written overnight.

Though central planning is not the way to do cybersecurity at all, a serious risk assessment would take at least a year and it would be treated explicitly in the bill as a “final agency action” for purposes of judicial review under the Administrative Procedure Act. The likelihood of court review and reversal is the only thing that might cause this risk assessment to actually use a sound methodology. As it is, watch for it to be a political document that rehashes tired cyberslogans and anecdotes.

The same administrative rigor should be applied to other regulatory actions created by the bill, such as designations of “covered critical infrastructure.” Amazingly, the bill requires no administrative law regularity (i.e., notice-and-comment rulemaking, agency methodology and decisions subject to court review) when the government designates private businesses as “covered critical infrastructure” (sec. 103). Yet when an owner of private infrastructure wants to contest such a designation, it does require administrative niceties (sec. 103(c)). In other words, the government can commandeer private businesses at whim. Getting your business out of the government’s maw will require leaden processes.

Hopefully, our courts will recognize that a “final agency action” has occurred at least when the Department of Homeland Security subjects privately owned infrastructure to special regulation, if not when it devises whatever plan or methodology to do so.

The same administrative defects exist in the section establishing “risk-based cybersecurity performance requirements.” The bill calls for DHS and its courtesans to come up with these regulations without reference to administrative process (sec. 104). That’s what they are, though: regulations. Calling them “performance requirements” doesn’t make a damn bit of difference. Only when it comes time to apply these regulatory requirements to regulated entities (sec. 105) would DHS “promulgate regulations.”

I can’t know what the authors of the bill are trying to achieve by separating the content of the regulations from their application to the private sector, but it seems intended to insulate the regulations themselves from administrative procedures. It’s like the government saying that the menu is going to be made up outside of law; just the force-feeding is subject to administrative procedure. Hopefully, that won’t wash in the courts either.

This matters not only because the rule of law is an important abstraction. Methodical risk analysis and methodical application of the law will tend to limit what things are deemed “covered critical infrastructure” and what the regulations on that infrastructure are. It will limit the number of things that fall within the privacy-threatening information-sharing portion of the bill, too.

Outside of regular order, cybersecurity will tend to be flailing, spasmodic, political, and threatening to privacy and liberty. We should not want a system of Soviet-style regulatory dictates for that reason—and because it is unlikely to produce better cybersecurity.

The better systems for discovering and responding to cybersecurity risks are already in place. One is the system of profit and loss that companies enjoy or suffer when they succeed or fail to secure their assets. Another is common law liability, where failure to prevent harms to others produces legal liability and damage awards.

The resistance to regular legal processes in this bill is part and parcel of the stampede to regulate in the name of cybersecurity. It’s a move toward centralized regulatory command-and-control over large swaths of the economy through “cybersecurity.”

Should a Congress that Doesn’t Understand Math Regulate Cybersecurity?

There’s a delicious irony in some of the testimony on cybersecurity that the Senate Homeland Security and Governmental Affairs Committee will hear today (starting at 2:30 Eastern — it’s unclear from the hearing’s page whether it will be live-streamed). Former National Security Agency general counsel Stewart Baker flubs a basic mathematical concept.

If Congress credits his testimony, is it really equipped to regulate the Internet in the name of “cybersecurity”?

Baker’s written testimony (not yet posted) says, stirringly, “Our vulnerabilities, and their consequences, are growing at an exponential rate.” He’s stirring cake batter, though. Here’s why.

Exponential growth occurs when the growth rate of the value of a mathematical function is proportional to the function’s current value. It’s nicely illustrated with rabbits. If in week one you have two rabbits, and in week two you have four, you can expect eight rabbits in week three and sixteen in week four. That’s exponential growth. The number of rabbits each week dictates the number of rabbits the following week. By the end of the year, the earth will be covered in rabbits. (The Internet provides us an exponents calculator, you see. Try calculating 2^52.)
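
The rabbit arithmetic is easy to check; here is a minimal sketch in Python, using the week count and starting population from the example above:

```python
# Exponential growth: each week's count is proportional to (here, double)
# the current count.
population = 2                 # week one: two rabbits
for week in range(2, 53):      # weeks two through fifty-two
    population *= 2
print(population)              # 2**52 = 4503599627370496
```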

The vulnerabilities of computers, networks, and data may be growing. But such vulnerabilities are not a function of the number of transistors that can be placed on an integrated circuit. Baker is riffing on Moore’s Law, which describes long-term exponential growth in computing power.

Instead, vulnerabilities will generally be a function of the number of implementations of information technology. A new protocol may open one or more vulnerabilities. A new piece of software may have one or more vulnerabilities. A new chip design may have one or more vulnerabilities. Interactions between various protocols and pieces of hardware and software may create vulnerabilities. And so on. At worst, in some fields of information technology, there might be something like cubic growth in vulnerabilities, but it’s doubtful that such a trend could last.

Why? Because vulnerabilities are also regularly closing. Protocols get ironed out. Software bugs get patched. Bad chip designs get fixed.

There’s another dimension along which vulnerabilities are also probably growing. This would be a function of the “quantity” of information technology out there. If there are 10,000 instances of a given piece of software in use out there with a vulnerability, that’s 10,000 vulnerabilities. If there are 100,000 instances of it, that’s 10 times more vulnerabilities—but that’s still linear growth, not exponential growth. The number of vulnerabilities grows in direct proportion to the number of instances of the technology.

Ignore the downward pressure on vulnerabilities, though, and put growth in the number of vulnerabilities together with growth in the propagation of vulnerabilities. Don’t you have exponential growth? No. The growth in vulnerability from new implementations of information technology and new instances of that technology multiply, and the product of factors that each grow linearly is polynomial growth at most. Across technologies, they sum. They don’t act as exponents to one another.
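
To make the contrast concrete, here is a toy comparison (all figures hypothetical): one count grows by a fixed increment each step, the other in proportion to its current value.

```python
# Linear vs. exponential growth over a year of weekly steps.
linear, exponential = 100, 100
for step in range(52):
    linear += 100        # fixed increment: new deployments add vulnerabilities
    exponential *= 2     # compounding: the current value dictates the next
print(linear)            # 5300
print(exponential)       # 100 * 2**52 -- astronomically larger
```

However large the multipliers, the linear count stays proportional to its inputs; only the compounding process explodes.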

Baker uses “vulnerability” and “threat” interchangeably, but careful thinkers about risk wouldn’t do this, I don’t think. Vulnerability is the existence of weakness. Threat is someone or something animated to exploit it (a “hazard” if that thing is inanimate). Vulnerabilities don’t really matter, in fact, if there isn’t anyone to exploit them. Do you worry about the number of hairs on your body being a source of pain? No, because nobody is going to come along and pluck them all. You need to have a threat vector, or vulnerability is just idle worry.

Now, threats can multiply quickly online. When exploits to some vulnerabilities are devised, their creators can propagate them quickly to others, such as “script kiddies” who will run such exploits everywhere they can. Hence the significance of the “zero-day threat” and the importance of patching software promptly.

As to consequence, Baker cites examples of recent hacks on HBGary, RSA, Verisign, and DigiNotar, as well as weakness in industrial control systems. This says nothing about growth rates, much less how the number of hacks in the last year forms the basis for more in the next. If some hacks allow other hacks to be implemented, that, again, would be a multiplier, not an exponent. (Generally, these most worrisome hacks can’t be executed by script kiddies, so they are not soaring in numerosity. You know what happens to consequential hacks that do soar in numerosity? They’re foreclosed by patches.)

Vulnerability and threat analyses are inputs into determinations about the likelihood of bad things happening. The next step is to multiply that likelihood against consequence. The product is a sense of how important a given risk is. That’s risk assessment.
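
That arithmetic can be sketched in a few lines (the figures are hypothetical, for illustration only):

```python
def risk(likelihood, consequence):
    """Risk assessment in miniature: expected loss is likelihood times consequence."""
    return likelihood * consequence

# A frequent nuisance can matter less than a rare catastrophe:
nuisance = risk(likelihood=0.5, consequence=1_000)            # 0.5 * 1,000
catastrophe = risk(likelihood=0.001, consequence=10_000_000)  # 0.001 * 10,000,000
print(catastrophe > nuisance)   # True: the rare event carries the larger risk
```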

But Baker isn’t terribly interested in acute risk management. During his years as Assistant Secretary for Policy at the Department of Homeland Security, the agency didn’t do the risk management work that would validate or invalidate the strip-search machine/intrusive pat-down policy (and it still hasn’t, despite a court order). The bill he’s testifying in support of wouldn’t manage cybersecurity risks terribly well, either, for reasons I’ll articulate in a forthcoming post.

Do your representatives in Congress get the math involved here? Do they know the difference between exponential growth and linear growth? Do they “get” risk management? Chances are they don’t. They may even parrot the “statistic” that Baker is putting forth. How well equipped do you suppose a body like that is for telling you how to do your cybersecurity?

The Senate’s SOPA Counterattack?: Cybersecurity the Undoing of Privacy

The Daily Caller reports that Senator Harry Reid (D-NV) is planning another effort at Internet regulation—right on the heels of the SOPA/PIPA debacle. The article seems calculated to insinuate that a follow-on to SOPA/PIPA might slip into cybersecurity legislation the Senate plans to take up. Whether that’s in the works or not, I’ll detail here the privacy threats in cybersecurity language being circulated on the Hill.

A Senate draft currently making the rounds is called the “Cybersecurity Information Sharing Act of 2012.” It sets up “cybersecurity exchanges” at which government and corporate entities would share threat information and solutions.

Sharing of information does not require federal approval or planning, of course. Information sharing happens all the time according to market processes. But “information sharing” is the solution Congress has seized upon, so federal information sharing programs we will have. Think of all this as a “see something, say something” campaign for corporate computer security people. Or perhaps “e-fusion centers.”

Reading over the draft, I was struck by sweeping language purporting to create “affirmative authority to monitor and defend against cybersecurity threats.” To understand the strangeness of these words, we must start at the beginning:

We live in a free country where all that is not forbidden is allowed. There is no need in such a country for “affirmative” authority to act. So what does this section do when it purports to permit private and governmental entities to monitor their information systems, operate active defenses, and such? It sweeps aside nearly all other laws controlling them.

“Consistent with the Constitution of the United States and notwithstanding and other provision of law,” it says (emphasis added), entities may act to preserve the security of their systems. This means that the only law controlling their actions would be the Constitution.

It’s nice that the Constitution would apply</sarcasm>, but the obligations in the Privacy Act of 1974 would not. The Electronic Communications Privacy Act would be void. Even the requirements of the E-Government Act of 2002, such as privacy impact assessments, would be swept aside.

The Constitution doesn’t constrain private actors, of course. This language would immunize them from liability under any and all regulation and under state or common law. Private actors would not be subject to suit for breaching contractual promises of confidentiality. They would not be liable for violating the privacy torts. Anything goes so long as one can make a claim to defending “information systems,” a term that refers to anything having to do with computers.

Elsewhere, the bill creates an equally sweeping immunity against law-breaking so long as the law-breaking provides information to a “cybersecurity exchange.” This is a breath-taking exemption from the civil and criminal laws that protect privacy, among other things.

(1) IN GENERAL.—No civil or criminal cause of action shall lie or be maintained in any Federal or State court against any non-Federal governmental or private entity, or any officer, employee, or agent of such an entity, and any such action shall be dismissed promptly, for the disclosure of a cybersecurity threat indicator to—
(A) a cybersecurity exchange under subsection (a)(1); or
(B) a private entity under subsection, (b)(1), provided the cybersecurity threat indicator is promptly shared with a cybersecurity exchange.

In addition to this immunity from suit, the bill creates an equally sweeping “good faith” defense:

Where a civil or criminal cause of action is not barred under paragraph (1), a good faith reliance by any person on a legislative authorization, a statutory authorization, or a good faith determination that this Act permitted the conduct complained of, is a complete defense against any civil or criminal action brought under this Act or any other law.

Good faith is a question of fact, and a corporate security official could argue successfully that she acted in good faith if a government official told her to turn over private data. This language allows the corporate sector to abandon its responsibility to follow the law in favor of following government edicts. We’ve seen attacks on the rule of law like this before.

A House Homeland Security subcommittee marked up a counterpart to this bill last week. It does not have similar language that I could find.

In 2009, I testified before the House Science Committee on cybersecurity, skeptical of the government’s ability to tackle cybersecurity but cognizant that the government must secure its own systems. “Cybersecurity exchanges” are a blind stab at addressing the many challenges in securing computers, networks, and data, and I think they are unnecessary at best. According to current plans, cybersecurity exchanges come at a devastating cost to our online privacy.

Congress seems poised once again to violate the rule from the SOPA/PIPA disaster: “First, do no harm to the Internet.”

The New SOPA: Now With Slightly Less Awfulness!

On Thursday, the House Judiciary Committee is slated to take up the misleadingly named Stop Online Piracy Act, an Internet censorship bill that will do little to actually stop piracy. In response to an outpouring of opposition from cybersecurity professionals, First Amendment scholars, technology entrepreneurs, and ordinary Internet users, the bill’s sponsors have cooked up an amended version that trims or softens a few of the most egregious provisions of the original proposal, bringing it closer to its Senate counterpart, PROTECT-IP. But the fundamental problem with SOPA has never been these details; it’s the core idea. The core idea is still to create an Internet blacklist, which means everything I say in this video still holds true:

Let’s review the main changes. Three new clarifying clauses have been added up front: the first two make clear that SOPA is not meant to create an affirmative obligation for site owners to monitor user content (good!) or mandate the implementation of technologies as a condition of compliance with the law (also good!). But the underlying incentives created by the statute push strongly in that direction whether or not it’s a formal requirement: What else do we imagine sites threatened under this law because of user-uploaded content or links will do to escape liability? A third clause says the bill shouldn’t be construed in a way that would impair the security or integrity of the network—which is a bit like slapping a label on a cake stipulating that it shouldn’t be construed to make you fat. These are all nice sentiments, but they remind me of the old philosophers’ joke: “You’ve obviously misinterpreted my theory; I didn’t intend for it to have any counterexamples!”

The big changes in the section establishing court-ordered blocking of supposed “rogue” sites appear to be intended to respond to the objections of cybersecurity professionals and network engineers, who pointed out that requiring falsification of Domain Name System records to redirect users from banned domains would interfere with a major government-supported initiative to secure the Internet against such hijacking. The updated language explicitly disavows the idea of redirection, removes a hard five-day deadline for compliance, and (crucially) says that any DNS operator (like your ISP) has fully satisfied its obligations under the statute if it simply fails to respond to DNS queries for blacklisted sites.

This is bad for transparency, in both the engineering and democratic senses of that term, insofar as it makes a government block indistinguishable from a technical failure, but it does, in a sense, address the direct conflict with DNSSEC. But as network engineers point out, a well-designed application implementing DNSSEC isn’t just going to give up when it doesn’t get a valid, cryptographically signed reply: it’s going to try other DNS servers (including servers outside US jurisdiction) until it finds one that answers.
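
The fallback behavior those engineers describe can be sketched like this (the `query` callback and resolver names are hypothetical stand-ins, not any real DNS library API):

```python
def resolve_with_fallback(name, resolvers, query):
    """Try resolvers in order until one returns a validated answer.

    `query(resolver, name)` is assumed to return the answer, or None when
    the resolver stays silent -- as a compliant operator would for a
    blacklisted name under the revised bill.
    """
    for resolver in resolvers:
        answer = query(resolver, name)
        if answer is not None:
            return answer            # first responsive server wins
    raise LookupError(f"no resolver answered for {name!r}")
```

A domestic resolver that simply declines to answer just pushes the client to the next server on the list, which is why the “blocking” amounts to little more than a few seconds’ delay.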

There are two possibilities here. The first is that application designers don’t design their software properly to implement DNSSEC for fear of liability under the statute’s anti-circumvention provisions, which would be a Very Bad Thing. The second is that they’re assured they won’t be held liable for good design, in which case this whole elaborate censorship process—which was never going to be particularly effective against people who actually want to find pirated content—becomes a truly farcical pantomime, in which nobody running reasonably up-to-date clients even notices the nominal “blocking,” beyond a few seconds delay in resolving the “blocked” site. Now, if we’ve got to have an Internet censorship law, a completely impotent one is surely the best kind, but it becomes a bit mysterious what the point of all this is, beyond providing civil libertarians with a chuckle at the vast amount of money Hollywood has wasted ramming this thing through.

The other big change is to the private right of action, which previously would have allowed any copyright holder to unilaterally compel payment processors and ad networks to cut off sites that it merely accuses of infringement, or enabling infringement, or (in a baffling specimen of tortured language) taking “deliberate actions to avoid confirming a high probability” that the site would be used for infringement. That last little hate crime against English is mercifully absent from the revised SOPA, which also makes clear that only foreign sites are covered; a judge is now required to actually issue an order before intermediaries are obligated to sever ties.

Which ultimately goes to show that the original proposal was so profoundly wretched that you can improve it a great deal, and still have a very bad idea. This is still, as many legal scholars have correctly observed, censorship by slightly circuitous economic means. The involvement of a judge should (knock on wood) weed out the most obviously frivolous complaints, but it still makes it far too easy for U.S. corporations to effectively destroy foreign Internet sites based on a one-sided proceeding in U.S. courts.

These changes are somewhat heartening insofar as they evince some legislative interest in addressing the legitimate concerns that have been raised thus far. But the problem with SOPA and PROTECT-IP isn’t that they need to be tweaked in order to get the details of an Internet censorship system right. There is no “right” way to do Internet censorship, and the best version of a bad idea remains a bad idea.

The Lives of Others 2.0

Tattoo it on your forearm—or better, that of your favorite legislator—for easy reference in the next debate over wiretapping: government surveillance is a security breach—by definition and by design. The latest evidence of this comes from Germany, where there’s growing furor over a hacker group’s allegations that government-designed Trojan Horse spyware is not only insecure, but packed with functions that exceed the limits of German law:

On Saturday, the CCC (the hacker group) announced that it had been given hard drives containing “state spying software,” which had allegedly been used by German investigators to carry out surveillance of Internet communication. The organization had analyzed the software and found it to be full of defects. They also found that it transmitted information via a server located in the United States. As well as its surveillance functions, it could be used to plant files on an individual’s computer. It was also not sufficiently protected, so that third parties with the necessary technical skills could hijack the Trojan horse’s functions for their own ends. The software possibly violated German law, the organization said.

Back in 2004–2005, software designed to facilitate police wiretaps was exploited by unknown parties to intercept the communications of dozens of top political officials in Greece. And just last year, we saw an attack on Google’s e-mail system targeting Chinese dissidents, which some sources have claimed was carried out by compromising a backend interface designed for law enforcement.

Any communications architecture that is designed to facilitate outsider access to communications—for all the most noble reasons—is necessarily more vulnerable to malicious interception as a result. That’s why technologists have looked with justified skepticism on periodic calls from intelligence agencies to redesign data networks for their convenience. At least in this case, the vulnerability is limited to specific target computers on which the malware has been installed. Increasingly, governments want their spyware installed at the switches—making for a more attractive target, and more catastrophic harm in the event of a successful attack.

Friday Links

  • “PBS used to ask, ‘If not PBS, then who?’ The answer now is: HBO, Bravo, Discovery, History, History International, Science, Planet Green, Sundance, Military, C-SPAN 1/2/3 and many more.”
  • “The fiscal problem that is destroying U.S. economic confidence is not the fiscal balance, however. It is the level of government expenditures relative to GDP.”
  • “The Pentagon’s first cyber security strategy… builds on national hysteria about threats to cybersecurity, the latest bogeyman to justify our bloated national security state.”
  • “How ‘secure’ do our homes remain if police, armed with no warrant, can pound on doors at will and, on hearing sounds indicative of things moving, forcibly enter and search for evidence of unlawful activity?”
  • National debt is driving the U.S. toward a double-dip recession

The Internet Kill-Switch Debate

Experienced debaters know that the framing of an issue often determines the outcome of the contest. Always watch the slant of the ground that debaters stand on.

The Internet kill-switch debate is instructive. Last week, Senators Lieberman (I-CT), Collins (R-ME) and Carper (D-DE) introduced a newly modified bill that seeks to give the government authority to seize power over the Internet or parts of it. The old version was widely panned.

In a statement about the new bill, they denied that it should be called a “kill switch,” of course–that language isn’t good for their cause after Egypt’s ousted dictator Hosni Mubarak illustrated what such power means. They also inserted a section called the “Internet Freedom Act.” It’s George Orwell with a clown nose, a comically ham-handed attempt to make it seem like the bill is not a government power-grab.

But they also said this: “The emergency measures in our bill apply in a precise and targeted way only to our most critical infrastructure.”

Accordingly, much of the reportage and commentary in this piece by Declan McCullagh explores whether the powers are indeed precisely targeted.

These are important and substantive points, right? Well, only if you’ve already conceded some more important ones, such as:

1) What authority does the government have to seize, or plan to seize, private assets? Such authority would be highly debatable under any of the constitutional powers kill-switchers might claim. Indeed, the Constitution protects against, or at least severely limits, takings of private property in the Fifth Amendment.


2) Would it be a good idea to have the government seize control of the Internet, or parts of it, under some emergency situation? A government attack on our private communications infrastructure would almost certainly undercut the reliability and security of our networks, computers and data.

The proponents of the Internet kill-switch have not met their burden on either of these fundamental points. Thus, the question of tailoring is irrelevant.

I managed to get in a word to this effect in the story linked above. “How does this make cybersecurity better? They have no answer,” I said. They really don’t.

No amount of tailoring can make a bad idea a good one. The Internet kill-switch debate is not about the precision or care with which such a policy might be designed or implemented. It’s about the galling claim on the part of Senators Lieberman, Collins and Carper that the U.S. government can seize private assets at will or whim.