Tag: cybersecurity

Should a Congress that Doesn’t Understand Math Regulate Cybersecurity?

There’s a delicious irony in some of the testimony on cybersecurity that the Senate Homeland Security and Governmental Affairs Committee will hear today (starting at 2:30 Eastern — it’s unclear from the hearing’s page whether it will be live-streamed). Former National Security Agency general counsel Stewart Baker flubs a basic mathematical concept.

If Congress credits his testimony, is it really equipped to regulate the Internet in the name of “cybersecurity”?

Baker’s written testimony (not yet posted) says, stirringly, “Our vulnerabilities, and their consequences, are growing at an exponential rate.” He’s stirring cake batter, though. Here’s why.

Exponential growth occurs when the growth rate of the value of a mathematical function is proportional to the function’s current value. It’s nicely illustrated with rabbits. If in week one you have two rabbits, and in week two you have four, you can expect eight rabbits in week three and sixteen in week four. That’s exponential growth. The number of rabbits each week dictates the number of rabbits the following week. By the end of the year, the earth will be covered in rabbits. (The Internet provides us an exponents calculator, you see. Try calculating 2^52.)
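
If you’d rather let a computer do the stirring, here’s a minimal Python sketch of the same arithmetic (the numbers come straight from the rabbit example above):

# Doubling each week: starting from 2 rabbits, week n has 2**n rabbits.
for week in range(1, 5):
    print(f"week {week}: {2 ** week} rabbits")   # 2, 4, 8, 16

# A year of doubling:
print(2 ** 52)   # 4503599627370496 -- roughly 4.5 quadrillion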

The vulnerabilities of computers, networks, and data may be growing. But such vulnerabilities are not a function of the number of transistors that can be placed on an integrated circuit. Baker is riffing on Moore’s Law, which describes long-term exponential growth in computing power.

Instead, vulnerabilities will generally be a function of the number of implementations of information technology. A new protocol may open one or more vulnerabilities. A new piece of software may have one or more vulnerabilities. A new chip design may have one or more vulnerabilities. Interactions between various protocols and pieces of hardware and software may create vulnerabilities. And so on. At worst, in some fields of information technology, there might be something like cubic growth in vulnerabilities, but it’s doubtful that such a trend could last.

Why? Because vulnerabilities are also regularly closing. Protocols get ironed out. Software bugs get patched. Bad chip designs get fixed.

There’s another dimension along which vulnerabilities are also probably growing. This would be a function of the “quantity” of information technology out there. If there are 10,000 instances of a vulnerable piece of software in use, that’s 10,000 vulnerabilities. If there are 100,000 instances of it, that’s 10 times more vulnerabilities—but that’s still linear growth, not exponential growth. The number of vulnerabilities grows in direct proportion to the number of instances of the technology.

Ignore the downward pressure on vulnerabilities, though, and put growth in the number of vulnerabilities together with growth in the propagation of vulnerabilities. Don’t you have exponential growth? No. At most you have polynomial growth: two linearly growing quantities multiplied together yield quadratic growth, which exponential growth quickly outstrips. The growth in vulnerability from new implementations of information technology and new instances of that technology multiply. Across technologies, they sum. They don’t act as exponents to one another.
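
To see why, here’s a toy Python calculation (every number invented for illustration) in which implementations and instances each grow linearly. Their product grows quadratically, and genuine exponential growth leaves any polynomial behind:

# Toy model: implementations and deployed instances each grow linearly.
for year in (1, 5, 10, 20, 30):
    implementations = 100 * year                   # linear
    instances_each = 1000 * year                   # linear
    product = implementations * instances_each     # quadratic, not exponential
    exponential = 2 ** year                        # true exponential growth
    print(year, product, exponential)

# At year 30 the quadratic product is 90,000,000, while 2**30 is already
# 1,073,741,824. Exponential growth eventually overtakes any polynomial.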

Baker uses “vulnerability” and “threat” interchangeably, but careful thinkers about risk don’t. Vulnerability is the existence of weakness. Threat is someone or something animated to exploit it (a “hazard” if that thing is inanimate). Vulnerabilities don’t really matter, in fact, if there isn’t anyone to exploit them. Do you worry about the number of hairs on your body being a source of pain? No, because nobody is going to come along and pluck them all. You need a threat vector, or vulnerability is just idle worry.

Now, threats can multiply quickly online. When exploits for some vulnerabilities are devised, their creators can propagate them quickly to others, such as “script kiddies” who will run such exploits everywhere they can. Hence, the significance of the “zero-day threat” and the importance of patching software promptly.

As to consequence, Baker cites examples of recent hacks on HBGary, RSA, Verisign, and DigiNotar, as well as weakness in industrial control systems. This says nothing about growth rates, much less how the number of hacks in the last year forms the basis for more in the next. If some hacks allow other hacks to be implemented, that, again, would be a multiplier, not an exponent. (Generally, these most worrisome hacks can’t be executed by script kiddies, so they are not soaring in numerosity. You know what happens to consequential hacks that do soar in numerosity? They’re foreclosed by patches.)

Vulnerability and threat analyses are inputs into determinations about the likelihood of bad things happening. The next step is to multiply that likelihood by the consequence. The product is a sense of how important a given risk is. That’s risk assessment.
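
In its simplest form, that’s just an expected-loss calculation. A sketch with hypothetical numbers:

# Risk = likelihood x consequence. Every figure here is hypothetical.
likelihood = 0.02            # estimated annual chance of the bad event
consequence = 5_000_000      # estimated loss in dollars if it occurs
risk = likelihood * consequence
print(f"annualized expected loss: ${risk:,.0f}")   # $100,000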

But Baker isn’t terribly interested in acute risk management. During his years as Assistant Secretary for Policy at the Department of Homeland Security, the agency didn’t do the risk management work that would validate or invalidate the strip-search machine/intrusive pat-down policy (and it still hasn’t, despite a court order). The bill he’s testifying in support of wouldn’t manage cybersecurity risks terribly well, either, for reasons I’ll articulate in a forthcoming post.

Do your representatives in Congress get the math involved here? Do they know the difference between exponential growth and linear growth? Do they “get” risk management? Chances are they don’t. They may even parrot the “statistic” that Baker is putting forth. How well equipped do you suppose a body like that is for telling you how to do your cybersecurity?

The Senate’s SOPA Counterattack? Cybersecurity the Undoing of Privacy

The Daily Caller reports that Senator Harry Reid (D-NV) is planning another effort at Internet regulation—right on the heels of the SOPA/PIPA debacle. The article seems calculated to insinuate that a follow-on to SOPA/PIPA might slip into cybersecurity legislation the Senate plans to take up. Whether that’s in the works or not, I’ll detail here the privacy threats in cybersecurity language being circulated on the Hill.

A Senate draft currently making the rounds is called the “Cybersecurity Information Sharing Act of 2012.” It sets up “cybersecurity exchanges” at which government and corporate entities would share threat information and solutions.

Sharing of information does not require federal approval or planning, of course. Information sharing happens all the time according to market processes. But “information sharing” is the solution Congress has seized upon, so federal information sharing programs we will have. Think of all this as a “see something, say something” campaign for corporate computer security people. Or perhaps “e-fusion centers.”

Reading over the draft, I was struck by sweeping language purporting to create “affirmative authority to monitor and defend against cybersecurity threats.” To understand the strangeness of these words, we must start at the beginning:

We live in a free country where all that is not forbidden is allowed. There is no need in such a country for “affirmative” authority to act. So what does this section do when it purports to permit private and governmental entities to monitor their information systems, operate active defenses, and such? It sweeps aside nearly all other laws controlling them.

“Consistent with the Constitution of the United States and notwithstanding any other provision of law,” it says (emphasis added), entities may act to preserve the security of their systems. This means that the only law controlling their actions would be the Constitution.

It’s nice that the Constitution would apply</sarcasm>, but the obligations in the Privacy Act of 1974 would not. The Electronic Communications Privacy Act would be void. Even the requirements of the E-Government Act of 2002, such as privacy impact assessments, would be swept aside.

The Constitution doesn’t constrain private actors, of course. This language would immunize them from liability under any and all regulation and under state or common law. Private actors would not be subject to suit for breaching contractual promises of confidentiality. They would not be liable for violating the privacy torts. Anything goes so long as one can make a claim to defending “information systems,” a term that refers to anything having to do with computers.

Elsewhere, the bill creates an equally sweeping immunity against law-breaking so long as the law-breaking provides information to a “cybersecurity exchange.” This is a breathtaking exemption from the civil and criminal laws that protect privacy, among other things.

(1) IN GENERAL.—No civil or criminal cause of action shall lie or be maintained in any Federal or State court against any non-Federal governmental or private entity, or any officer, employee, or agent of such an entity, and any such action shall be dismissed promptly, for the disclosure of a cybersecurity threat indicator to—
(A) a cybersecurity exchange under subsection (a)(1); or
(B) a private entity under subsection (b)(1), provided the cybersecurity threat indicator is promptly shared with a cybersecurity exchange.

In addition to this immunity from suit, the bill creates an equally sweeping “good faith” defense:

Where a civil or criminal cause of action is not barred under paragraph (1), a good faith reliance by any person on a legislative authorization, a statutory authorization, or a good faith determination that this Act permitted the conduct complained of, is a complete defense against any civil or criminal action brought under this Act or any other law.

Good faith is a question of fact, and a corporate security official could argue successfully that she acted in good faith if a government official told her to turn over private data. This language allows the corporate sector to abandon its responsibility to follow the law in favor of following government edicts. We’ve seen attacks on the rule of law like this before.

A House Homeland Security subcommittee marked up a counterpart to this bill last week. It does not have similar language that I could find.

In 2009, I testified before the House Science Committee on cybersecurity, skeptical of the government’s ability to tackle cybersecurity but cognizant that the government must secure its own systems. “Cybersecurity exchanges” are a blind stab at addressing the many challenges in securing computers, networks, and data, and I think they are unnecessary at best. According to current plans, cybersecurity exchanges come at a devastating cost to our online privacy.

Congress seems poised once again to violate the rule from the SOPA/PIPA disaster: “First, do no harm to the Internet.”

The New SOPA: Now With Slightly Less Awfulness!

On Thursday, the House Judiciary Committee is slated to take up the misleadingly named Stop Online Piracy Act, an Internet censorship bill that will do little to actually stop piracy. In response to an outpouring of opposition from cybersecurity professionals, First Amendment scholars, technology entrepreneurs, and ordinary Internet users, the bill’s sponsors have cooked up an amended version that trims or softens a few of the most egregious provisions of the original proposal, bringing it closer to its Senate counterpart, PROTECT-IP. But the fundamental problem with SOPA has never been these details; it’s the core idea. The core idea is still to create an Internet blacklist, which means everything I say in this video still holds true:

[Video embedded in the original post.]

Let’s review the main changes. Three new clarifying clauses have been added up front: the first two make clear that SOPA is not meant to create an affirmative obligation for site owners to monitor user content (good!) or mandate the implementation of technologies as a condition of compliance with the law (also good!). But the underlying incentives created by the statute push strongly in that direction whether or not it’s a formal requirement: What else do we imagine sites threatened under this law because of user-uploaded content or links will do to escape liability? A third clause says the bill shouldn’t be construed in a way that would impair the security or integrity of the network—which is a bit like slapping a label on a cake stipulating that it shouldn’t be construed to make you fat. These are all nice sentiments, but they remind me of the old philosophers’ joke: “You’ve obviously misinterpreted my theory; I didn’t intend for it to have any counterexamples!”

The big changes in the section establishing court-ordered blocking of supposed “rogue” sites appear to be intended to respond to the objections of cybersecurity professionals and network engineers, who pointed out that requiring falsification of Domain Name System records to redirect users from banned domains would interfere with a major government-supported initiative to secure the Internet against such hijacking. The updated language explicitly disavows the idea of redirection, removes a hard five-day deadline for compliance, and (crucially) says that any DNS operator (like your ISP) has fully satisfied its obligations under the statute if it simply fails to respond to DNS queries for blacklisted sites.

This is bad for transparency, in both the engineering and democratic senses of that term, insofar as it makes a government block indistinguishable from a technical failure, but it does, in a sense, address the direct conflict with DNSSEC. But as network engineers point out, a well-designed application implementing DNSSEC isn’t just going to give up when it doesn’t get a valid, cryptographically signed reply: it’s going to try other DNS servers (including servers outside US jurisdiction) until it finds one that answers.
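
For flavor, here’s a minimal Python sketch of that fallback logic. The query_signed function is a hypothetical stand-in for “send the query and validate the DNSSEC signature chain”; it is not a real library call, and real validating resolvers are far more involved:

# Sketch of a DNSSEC-aware client's fallback behavior. Addresses come from
# the RFC 5737 documentation ranges; query_signed() is a hypothetical stub.
SERVERS = [
    "192.0.2.53",      # ISP resolver (subject to a U.S. blocking order)
    "198.51.100.53",   # alternative public resolver
    "203.0.113.53",    # resolver outside U.S. jurisdiction
]

def query_signed(server, name):
    """Hypothetical: return validated records, or None on timeout/bad signature."""
    return None  # stub for illustration

def resolve(name):
    for server in SERVERS:
        answer = query_signed(server, name)
        if answer is not None:
            return answer   # cryptographically valid reply; done
        # No valid signed reply (e.g., the server silently dropped the query
        # to comply with a blocking order), so try the next server.
    return None

print(resolve("blocked-site.example"))  # every server is stubbed out: None

Silent non-response from the first server just pushes the query down the list; at worst, blocking adds a timeout’s worth of delay.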

There are two possibilities here. The first is that application designers don’t design their software properly to implement DNSSEC for fear of liability under the statute’s anti-circumvention provisions, which would be a Very Bad Thing. The second is that they’re assured they won’t be held liable for good design, in which case this whole elaborate censorship process—which was never going to be particularly effective against people who actually want to find pirated content—becomes a truly farcical pantomime, in which nobody running reasonably up-to-date clients even notices the nominal “blocking,” beyond a few seconds’ delay in resolving the “blocked” site. Now, if we’ve got to have an Internet censorship law, a completely impotent one is surely the best kind, but it becomes a bit mysterious what the point of all this is, beyond providing civil libertarians with a chuckle at the vast amount of money Hollywood has wasted ramming this thing through.

The other big change is to the private right of action, which previously would have allowed any copyright holder to unilaterally compel payment processors and ad networks to cut off sites that it merely accuses of infringement, or enabling infringement, or (in a baffling specimen of tortured language) taking “deliberate actions to avoid confirming a high probability” that the site would be used for infringement. That last little hate crime against English is mercifully absent from the revised SOPA, which also makes clear that only foreign sites are covered; a judge is now required to actually issue an order before intermediaries are obligated to sever ties.

Which ultimately goes to show that the original proposal was so profoundly wretched that you can improve it a great deal, and still have a very bad idea. This is still, as many legal scholars have correctly observed, censorship by slightly circuitous economic means. The involvement of a judge should (knock on wood) weed out the most obviously frivolous complaints, but it still makes it far too easy for U.S. corporations to effectively destroy foreign Internet sites based on a one-sided proceeding in U.S. courts.

These changes are somewhat heartening insofar as they evince some legislative interest in addressing the legitimate concerns that have been raised thus far. But the problem with SOPA and PROTECT-IP isn’t that they need to be tweaked in order to get the details of an Internet censorship system right. There is no “right” way to do Internet censorship, and the best version of a bad idea remains a bad idea.

The Lives of Others 2.0

Tattoo it on your forearm—or better, that of your favorite legislator—for easy reference in the next debate over wiretapping: government surveillance is a security breach—by definition and by design. The latest evidence of this comes from Germany, where there’s growing furor over a hacker group’s allegations that government-designed Trojan Horse spyware is not only insecure, but packed with functions that exceed the limits of German law:

On Saturday, the CCC (the hacker group) announced that it had been given hard drives containing “state spying software,” which had allegedly been used by German investigators to carry out surveillance of Internet communication. The organization had analyzed the software and found it to be full of defects. They also found that it transmitted information via a server located in the United States. As well as its surveillance functions, it could be used to plant files on an individual’s computer. It was also not sufficiently protected, so that third parties with the necessary technical skills could hijack the Trojan horse’s functions for their own ends. The software possibly violated German law, the organization said.

Back in 2004–2005, software designed to facilitate police wiretaps was exploited by unknown parties to intercept the communications of dozens of top political officials in Greece. And just last year, we saw an attack on Google’s e-mail system targeting Chinese dissidents, which some sources have claimed was carried out by compromising a backend interface designed for law enforcement.

Any communications architecture that is designed to facilitate outsider access to communications, however noble the reasons, is necessarily more vulnerable to malicious interception as a result. That’s why technologists have looked with justified skepticism on periodic calls from intelligence agencies to redesign data networks for their convenience. At least in this case, the vulnerability is limited to specific target computers on which the malware has been installed. Increasingly, governments want their spyware installed at the switches—making for a more attractive target, and more catastrophic harm in the event of a successful attack.

Friday Links

  • “PBS used to ask, ‘If not PBS, then who?’ The answer now is: HBO, Bravo, Discovery, History, History International, Science, Planet Green, Sundance, Military, C-SPAN 1/2/3 and many more.”
  • “The fiscal problem that is destroying U.S. economic confidence is not the fiscal balance, however. It is the level of government expenditures relative to GDP.”
  • “The Pentagon’s first cyber security strategy… builds on national hysteria about threats to cybersecurity, the latest bogeyman to justify our bloated national security state.”
  • “How ‘secure’ do our homes remain if police, armed with no warrant, can pound on doors at will and, on hearing sounds indicative of things moving, forcibly enter and search for evidence of unlawful activity?”
  • National debt is driving the U.S. toward a double-dip recession

The Internet Kill-Switch Debate

Experienced debaters know that the framing of an issue often determines the outcome of the contest. Always watch the slant of the ground that debaters stand on.

The Internet kill-switch debate is instructive. Last week, Senators Lieberman (I-CT), Collins (R-ME) and Carper (D-DE) introduced a newly modified bill that seeks to give the government authority to seize power over the Internet or parts of it. The old version was widely panned.

In a statement about the new bill, they denied that it should be called a “kill switch,” of course–that language isn’t good for their cause after Egypt’s ousted dictator Hosni Mubarak illustrated what such power means. They also inserted a section called the “Internet Freedom Act.” It’s George Orwell with a clown nose, a comically ham-handed attempt to make it seem like the bill is not a government power-grab.

But they also said this: “The emergency measures in our bill apply in a precise and targeted way only to our most critical infrastructure.”

Accordingly, much of the reportage and commentary in this piece by Declan McCullagh explores whether the powers are indeed precisely targeted.

These are important and substantive points, right? Well, only if you’ve already conceded some more important ones, such as:

1) What authority does the government have to seize, or plan to seize, private assets? Such authority would be highly debatable under any of the constitutional powers kill-switchers might claim. Indeed, the Constitution protects against, or at least severely limits, takings of private property in the Fifth Amendment.

and

2) Would it be a good idea to have the government seize control of the Internet, or parts of it, under some emergency situation? A government attack on our private communications infrastructure would almost certainly undercut the reliability and security of our networks, computers and data.

The proponents of the Internet kill-switch have not met their burden on either of these fundamental points. Thus, the question of tailoring is irrelevant.

I managed to get in a word to this effect in the story linked above. “How does this make cybersecurity better? They have no answer,” I said. They really don’t.

No amount of tailoring can make a bad idea a good one. The Internet kill-switch debate is not about the precision or care with which such a policy might be designed or implemented. It’s about the galling claim on the part of Senators Lieberman, Collins and Carper that the U.S. government can seize private assets at will or whim.

Cyber-Intrigue and Miscalculation

If you haven’t been following the intrigue around Wikileaks and the security companies hoping to help the government fight it, this stuff is not to be missed. Recommended:

The latter story links to a document purporting to show that a government contractor called Palantir Technologies suggested unnamed ways that Glenn Greenwald (author of this excellent Cato study) might be made to choose “professional preservation” over his sympathetic reporting about Wikileaks. A later page talks of “proactive strategies” including: “Use social media to profile and identify risky behavior of employees.”

Wikileaks has no employees. I take this to mean that the personal lives of Wikileaks supporters and sympathizers would be used to undercut its public credibility. Because Julian Assange hasn’t done enough…

While we’re on credibility: This may well be Wikileaks’ rehabilitation. Wikileaks erred badly by letting itself and Julian Assange become the story. We’re not having the discussion we should have about U.S. government behavior because of Assange’s self-regard.

But now defenders of the U.S. government are making themselves the story, and they may be looking even worse than Wikileaks and Assange. (N.B.: Palantir has apologized to Greenwald.) That doesn’t mean that we will immediately focus on what Wikileaks has revealed about U.S. government behavior, but it could clear the decks for those conversations to happen.

The concept of “miscalculation” seems more prominent in international affairs and foreign policy than in other fields, and it comes to mind here. Wikileaks and its opponents are joined in a duel of miscalculation: the side that miscalculates least will have the upper hand.