Tag: risk management

Soviet-Style Cybersecurity Regulation

Reading over the cybersecurity legislative package recently introduced in the Senate is like reading a Soviet planning document. One of its fundamental flaws, if it passes, would be the centralizing and deadening effect it would have on society’s responses to the many and varied problems that are poorly captured by the word “cybersecurity.”

But I’m most struck by how, at every turn, this bill strains to release cybersecurity regulators—and their regulated entities—from the bonds of law. The Department of Homeland Security could commandeer private infrastructure into its regulatory regime simply by naming it “covered critical infrastructure.” DHS and a panel of courtesan institutes and councils would develop the regulatory regime outside of ordinary administrative processes. And—worst, perhaps—regulated entities would be insulated from ordinary legal liability if they were in compliance with government dictates. Regulatory compliance could start to usurp protection of the public as a corporate priority.

The bill retains privacy-threatening information-sharing language that I critiqued in no uncertain terms last week (Title VII), though the language has changed. (I have yet to analyze what effect those changes have.)

The news for Kremlin Beltway-watchers, of course, is that the Department of Homeland Security has won the upper hand in the turf battle. (That’s the upshot of Title III of the bill.) It’s been a clever gambit of Washington’s to make the debate about which agency should handle cybersecurity, rather than about what the government’s role is and what it can actually contribute. Is it a small consolation that it’s a civilian security agency that gets to oversee Internet security for us, and not the military? None-of-the-above would have been the best choice of all.

Ah, but the government has access to secret information that nobody else does, doesn’t it? Don’t be so sure. Secrecy is a claim to authority that I reject. Many swoon to secrecy, assuming the government has 1) special information that is 2) actually helpful. I interpret secrecy as a failure to put facts into evidence. My assumption is the one consistent with accountable government and constitutional liberty. But we’re doing Soviet-style cybersecurity here, so let’s proceed.

Title I is the part of the bill that Sovietizes cybersecurity. It brings a welter of government agencies, boards, and institutes together with private-sector owners of government-deemed “critical infrastructure” to do sector-by-sector “cyber risk assessments” and to produce “cybersecurity performance requirements.” Companies would be penalized if they failed to certify to the government annually that they have “developed and effectively implemented security measures sufficient to satisfy the risk-based security performance requirements.” Twenty-first century paperwork violations. But in exchange, critical infrastructure owners would be insulated from liability (sec. 105(e))—a neat corporatist trade-off.

How poorly tuned these security-by-committee processes are. Within just 90 days, the bill requires a “top-level assessment” of “cybersecurity threats, vulnerabilities, risks, and probability of a catastrophic incident across all critical infrastructure sectors” in order to guide the allocation of resources. That’s going to produce a risk assessment with all the quality of a student term paper written overnight.

Though central planning is not the way to do cybersecurity at all, a serious risk assessment would take at least a year and it would be treated explicitly in the bill as a “final agency action” for purposes of judicial review under the Administrative Procedure Act. The likelihood of court review and reversal is the only thing that might cause this risk assessment to actually use a sound methodology. As it is, watch for it to be a political document that rehashes tired cyberslogans and anecdotes.

The same administrative rigor should apply to the other regulatory actions the bill creates, such as designations of “covered critical infrastructure.” Amazingly, the bill requires no administrative-law regularity (i.e., notice-and-comment rulemaking, agency methodology and decisions subject to court review) when the government designates private businesses as “covered critical infrastructure” (sec. 103), but it does require administrative niceties when an owner of private infrastructure wants to contest such a designation (sec. 103(c)). In other words, the government can commandeer private businesses at whim. Getting your business out of the government’s maw will require leaden processes.

Hopefully, our courts will recognize that a “final agency action” has occurred at least when the Department of Homeland Security subjects privately owned infrastructure to special regulation, if not when it devises whatever plan or methodology to do so.

The same administrative defects exist in the section establishing “risk-based cybersecurity performance requirements.” The bill calls for the DHS and its courtesans to come up with these regulations without reference to administrative process (sec. 104). That’s what they are, though: regulations. Calling them “performance requirements” doesn’t make a damn bit of difference. Only when it comes time to apply these regulatory requirements to regulated entities (sec. 105) would the DHS “promulgate regulations.”

I can’t know what the authors of the bill are trying to achieve by separating the content of the regulations from their application to the private sector, but it seems intended to insulate the regulations from administrative procedures. It’s like the government saying that the menu is going to be made up outside of law—just the force-feeding is subject to administrative procedure. Hopefully, that won’t wash in the courts either.

This matters not only because the rule of law is an important abstraction. Methodical risk analysis and methodical application of the law will tend to limit what things are deemed “covered critical infrastructure” and what the regulations on that infrastructure are. It will limit the number of things that fall within the privacy-threatening information-sharing portion of the bill, too.

Outside of regular order, cybersecurity will tend to be flailing, spasmodic, political, and threatening to privacy and liberty. We should not want a system of Soviet-style regulatory dictates for that reason—and because it is unlikely to produce better cybersecurity.

The better systems for discovering and responding to cybersecurity risks are already in place. One is the system of profit and loss that companies enjoy or suffer when they succeed or fail to secure their assets. Another is common law liability, where failure to prevent harms to others produces legal liability and damage awards.

The resistance to regular legal processes in this bill is part and parcel of the stampede to regulate in the name of cybersecurity. It’s a move toward centralized regulatory command-and-control over large swaths of the economy through “cybersecurity.”

Should a Congress that Doesn’t Understand Math Regulate Cybersecurity?

There’s a delicious irony in some of the testimony on cybersecurity that the Senate Homeland Security and Governmental Affairs Committee will hear today (starting at 2:30 Eastern — it’s unclear from the hearing’s page whether it will be live-streamed). Former National Security Agency general counsel Stewart Baker flubs a basic mathematical concept.

If Congress credits his testimony, is it really equipped to regulate the Internet in the name of “cybersecurity”?

Baker’s written testimony (not yet posted) says, stirringly, “Our vulnerabilities, and their consequences, are growing at an exponential rate.” He’s stirring cake batter, though. Here’s why.

Exponential growth occurs when the growth rate of the value of a mathematical function is proportional to the function’s current value. It’s nicely illustrated with rabbits. If in week one you have two rabbits, and in week two you have four, you can expect eight rabbits in week three and sixteen in week four. That’s exponential growth. The number of rabbits each week dictates the number of rabbits the following week. By the end of the year, the earth will be covered in rabbits. (The Internet provides us an exponents calculator, you see. Try calculating 2^52.)
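For readers who want the arithmetic spelled out, here is a minimal illustrative sketch in Python (my own toy example, not anything from Baker’s testimony) of the rabbit math:

    # Doubling each week: next week's count is proportional to this week's count.
    rabbits = 2
    for week in range(1, 5):
        print(f"week {week}: {rabbits} rabbits")
        rabbits *= 2  # growth rate proportional to current value -- exponential

    print(f"week 52: {2 ** 52} rabbits")  # about 4.5 quadrillion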

The vulnerabilities of computers, networks, and data may be growing. But such vulnerabilities are not a function of the number of transistors that can be placed on an integrated circuit. Baker is riffing on Moore’s Law, which describes long-term exponential growth in computing power.

Instead, vulnerabilities will generally be a function of the number of implementations of information technology. A new protocol may open one or more vulnerabilities. A new piece of software may have one or more vulnerabilities. A new chip design may have one or more vulnerabilities. Interactions between various protocols and pieces of hardware and software may create vulnerabilities. And so on. At worst, in some fields of information technology, there might be something like cubic growth in vulnerabilities, but it’s doubtful that such a trend could last.

Why? Because vulnerabilities are also regularly closing. Protocols get ironed out. Software bugs get patched. Bad chip designs get fixed.

There’s another dimension along which vulnerabilities are also probably growing. This would be a function of the “quantity” of information technology out there. If there are 10,000 instances of a given piece of software in use out there with a vulnerability, that’s 10,000 vulnerabilities. If there are 100,000 instances of it, that’s 10 times more vulnerabilities—but that’s still linear growth, not exponential growth. The number of vulnerabilities grows in direct proportion to the number of instances of the technology.

Ignore the downward pressure on vulnerabilities, though, and put growth in the number of vulnerabilities together with growth in the propagation of vulnerabilities. Don’t you have exponential growth? No. At most you have polynomial growth. The growth in vulnerability from new implementations of information technology and from new instances of that technology multiplies together. Across technologies, the totals sum. They don’t act as exponents to one another.
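To make that concrete, here is a toy calculation in Python. Every number in it is invented purely for illustration; the point is only that the factors multiply and sum rather than compounding on themselves:

    # Invented numbers, for illustration only.
    implementations = 1_000                # protocols, programs, chip designs in use
    vulns_per_implementation = 3           # average weaknesses per implementation
    instances_per_implementation = 10_000  # deployed copies of each implementation

    total = implementations * vulns_per_implementation * instances_per_implementation
    print(total)  # doubling any one factor doubles the total; no factor is an exponent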

Baker uses “vulnerability” and “threat” interchangeably, but careful thinkers about risk don’t do that. Vulnerability is the existence of weakness. Threat is someone or something animated to exploit it (a “hazard” if that thing is inanimate). Vulnerabilities don’t really matter, in fact, if there isn’t anyone to exploit them. Do you worry about the number of hairs on your body being a source of pain? No, because nobody is going to come along and pluck them all. You need a threat vector, or vulnerability is just idle worry.

Now, threats can multiply quickly online. When exploits to some vulnerabilities are devised, their creators can propagate them quickly to others, such as “script kiddies” who will run such exploits everywhere they can. Hence the significance of the “zero-day threat” and the importance of patching software promptly.

As to consequence, Baker cites examples of recent hacks on HBGary, RSA, Verisign, and DigiNotar, as well as weaknesses in industrial control systems. This says nothing about growth rates, much less how the number of hacks in the last year forms the basis for more in the next. If some hacks allow other hacks to be implemented, that, again, would be a multiplier, not an exponent. (Generally, these most worrisome hacks can’t be executed by script kiddies, so they are not soaring in numerosity. You know what happens to consequential hacks that do soar in numerosity? They’re foreclosed by patches.)

Vulnerability and threat analyses are inputs into determinations about the likelihood of bad things happening. The next step is to multiply that likelihood by the consequence. The product is a sense of how important a given risk is. That’s risk assessment.
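In its textbook form, that calculation is just a multiplication. A sketch, with hypothetical numbers chosen only to show the shape of it:

    # Hypothetical numbers, for illustration only.
    likelihood = 1e-7          # assumed annual probability of the bad event
    consequence = 50_000_000   # assumed cost in dollars if it happens
    expected_loss = likelihood * consequence
    print(expected_loss)       # 5.0 -- about five dollars of expected annual loss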

But Baker isn’t terribly interested in acute risk management. During his years as Assistant Secretary for Policy at the Department of Homeland Security, the agency didn’t do the risk management work that would validate or invalidate the strip-search machine/intrusive pat-down policy (and it still hasn’t, despite a court order). The bill he’s testifying in support of wouldn’t manage cybersecurity risks terribly well, either, for reasons I’ll articulate in a forthcoming post.

Do your representatives in Congress get the math involved here? Do they know the difference between exponential growth and linear growth? Do they “get” risk management? Chances are they don’t. They may even parrot the “statistic” that Baker is putting forth. How well equipped do you suppose a body like that is for telling you how to do your cybersecurity?

Should TSA Change Its Policy?

News that Transportation Security Administration officers required a 95-year-old cancer patient to remove her adult diaper for search lit up social media this weekend. It’s reminiscent of the recent story in which a 6-year-old girl got the pat-down because she didn’t hold still in the strip-search machine; shortly thereafter, TSA administrator John Pistole testified at a Senate hearing that the agency would change its policy on children.

So, should the TSA change policy once again? Almost certainly. Will it ever arrive at balanced policies that aren’t punctuated by outrages like this? Almost certainly not.

You see, the TSA does not seek policies that anyone would call sensible or balanced. Rather, it follows political cues, subject to the bureaucratic prime directive described by Cato chairman emeritus and distinguished senior economist Bill Niskanen long ago: maximize discretionary budget.

When the TSA’s political cues pointed toward more intrusion, that’s where it went. Recall the agency’s obsession with small, sharp things early in its tenure, and the shoe fetish it adopted after Richard Reid demonstrated the potential hazards of footwear. Next came liquids, after the revelation of a plot to smuggle bomb components aboard in sports bottles. And in December 2009, the underwear bomber focused the TSA on everyone’s pelvic region. Woe to the traveler whose medical condition requires her to wear something concealing the government’s latest fixation.

The TSA pursues the bureaucratic prime directive—maximize budget—by assuming, fostering, and acting on the maximum possible threat. So a decade after 9/11, TSA and Department of Homeland Security officials give strangely time-warped commentary whenever they speechify or testify, recalling the horrors of 2001 as if it’s 2003. The prime directive also helps explain why TSA has expanded its programs following each of the attempts on aviation since 9/11, even though each of them has failed. For a security agency, security threats are good for business. TSA will never seek balance, but will always promote threat as it offers the only solution: more TSA.

Because of countervailing threats to its budget—sufficient outrage on the part of the public—TSA will withdraw from certain policies from time to time. But there is no capacity among the public to sustain “outrage” until the agency is actually managing risk in a balanced and cost-effective way. (You can ignore official claims of “risk-based” policies until you’ve actually seen the risk management and cost-benefit documents.)

TSA should change its policy, yes, but its fundamental policies will not change. Episodes like this will continue indefinitely against a background of invasive, overwrought airline security that suppresses both the freedom to travel and the economic well-being of the country.

In a 2005 Reason magazine “debate” on airline security, I described the incentive structure that airlines and airports face, which is much more conducive to nesting security with convenience, privacy, savings, and overall traveler comfort and satisfaction. The threat of terrorism has only dropped since then. We should drop the TSA.

TSA’s Pistole Says ‘Risk-Based,’ Means ‘Privacy Invasive’

There is one thing you can take to the bank from TSA administrator John Pistole’s statement that he wants to shift to “risk-based” screening at airports: it hasn’t been risk-based up to now. That’s a welcome concession because, as I’ve said before, the DHS and its officials routinely mouth risk terminology, but rarely subject themselves to the rigor of actual risk analysis.

What Administrator Pistole envisions is nothing new. It’s the idea of checking the backgrounds of air travelers more deeply, attempting to determine which of them present less of a threat and which present more. That opens security holes that the risk-averse TSA is unlikely to actually tolerate, and it has significant privacy and Due Process consequences, including migration toward a national ID system.

I wrote about one plan for a “trusted traveler”-type system recently. As the details of what Pistole envisions emerge, I’ll look forward to reviewing it.

The DHS Privacy Committee published a document several years ago that can help Pistole with developing an actual risk-based system and with managing its privacy consequences. The Privacy Committee itself exists to review programs like these, but has not been used for this purpose recently despite claims that it has.

If Pistole wants to shift to risk-based screening, he should require a full risk-based study of airport screening and publish it so that the public, commentators, and courts can compare the actual security benefits of the TSA’s policies with their costs in dollars, risk transfer, privacy, and constitutional values.

TSA’s Strip/Grope: Unconstitutional?

Writing in the Washington Post, George Washington University law professor Jeffrey Rosen carefully concludes, “there’s a strong argument that the TSA’s measures violate the Fourth Amendment, which prohibits unreasonable searches and seizures.” The strip/grope policy doesn’t carefully escalate through levels of intrusion the way a better-designed program using more privacy-protective technology could.

It’s a good constitutional technician’s analysis. But Professor Rosen doesn’t broach one of the most important likely determinants of Fourth Amendment reasonableness: the risk to air travel these searches are meant to reduce.

Writing in Politico last week, I pointed out that there have been 99 million domestic flights in the last decade, transporting seven billion passengers. Not one of these passengers snuck a bomb onto a plane and detonated it. Given that this period coincides with the zenith of Al Qaeda terrorism, this suggests a very low risk.
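One rough way to put a number on “very low”: with zero successful bombings observed in roughly 99 million flights, the statistician’s “rule of three” gives an approximate 95 percent upper bound on the per-flight probability. A back-of-the-envelope sketch (my arithmetic, not anything from the Politico piece):

    # Rule of three: with zero events in n independent trials, 3/n approximates
    # the 95% upper bound on the per-trial probability of the event.
    flights = 99_000_000
    upper_bound = 3 / flights
    print(upper_bound)  # about 3e-8, roughly one chance in 33 million per flight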

Proponents of the TSA’s regime point out that threats are very high, according to information they have. But that trump card—secret threat information—is beginning to fail with the public. It would take longer, but would eventually fail with courts, too.

But rather than relying on courts to untie these knots, Congress should subject TSA and the Department of Homeland Security to measures that will ultimately answer the open risk questions: Require any lasting security measures to be justified on the public record with documented risk management and cost-benefit analysis. Subject such analyses to a standard of review such as the Administrative Procedure Act’s “arbitrary and capricious” standard. Indeed, Congress might make TSA security measures APA notice-and-comment rules, with appropriate accommodation for (truly) temporary measures required by security exigency.

Claims to secrecy are claims to power. Congress should withdraw the power of secrecy from the TSA and DHS, subjecting these agencies to the rule of law.

Does Risk Management Counsel in Favor of a Biometric Traveler Identity System?

Writing on Reason’s Hit & Run blog, Robert Poole argues that the Transportation Security Administration should use a risk-based approach to security. As I noted in my recent “’Strip-or-Grope’ vs. Risk Management” post, the Department of Homeland Security often talks about risk but fails to actually do risk management. Poole and I agree—everyone agrees—that DHS should use risk management. They just don’t.

With the pleasure of remembering our excellent 2005 Reason debate, “Transportation Security Aggravation,” I must again differ with Poole’s prescription, however.

Poole says TSA should separate travelers into three basic groups (quoting at length):

  1. Trusted Travelers, who have passed a background check and are issued a biometric ID card that proves (when they arrive at the security checkpoint) that they are the person who was cleared. This group would include cockpit crews, anyone holding a government security clearance, anyone already a member of the Department of Homeland Security’s Global Entry, Sentri, and Nexus, and anyone who applied and was accepted into a new Trusted Traveler program. These people would get to bypass regular security lanes  upon having their biometric card checked at the airport, subject only to random screening of a small fraction.
  2. High-risk travelers, either those about whom no information is known or who are flagged by the various Department of Homeland Security (DHS) intelligence lists as warranting “Selectee” status. They would be the only ones facing body-scanners or pat-downs as mandatory, routine screening.
  3. Ordinary travelers—basically everyone else, who would go through metal detector and put carry-ons through 2-D X-ray machines. They would not have to remove shoes or jackets, and could travel with liquids. A small fraction of this group would be subject to random “Selectee”-type screening.

He believes, and has argued for years, that dividing “good guys” from “bad guys” will produce effective security. It’s certainly intuitive. Poole’s a good guy. I’m a good guy. You’re a good guy (in a non-gender-specific sense).

Knowing who people are works for us in everyday life: Because we can find people who borrow our stuff, for example—and because we know that we can be found—we husband our behavior and generally don’t steal things from each other, we, the decent people with a stake in society.

Poole’s thinking takes our common experience and scales it up to a national program. Capture people’s identities, link enough biography to those identities, and—voila!—we know who the good guys are and who are the (potential) bad.

But precisely what biographical information assures that a person is “good”? (The proposal is for government action: it would be a violation of due process to keep the criteria secret and an equal protection violation to unfairly divide good and bad.) How do we know a person hasn’t gone bad from the time that their goodness was established?

The attacker we face with air security measures is not among the decent cohort whose behavior is channeled by identification. That attacker’s path to mischief is nicely mapped out by Poole’s proposal: Get into the Trusted Traveler group, or find someone who can get in it. (It’s easy to know if you’re a part of it. They give you a card! You can also test the system to see if you’ve been designated “high-risk” or “ordinary.”)

With a Trusted Traveler positioned to do wrong, chances are good that he or she won’t be subjected to screening and can carry dangerous articles onto a plane at will. The end result? Predictable gnashing of teeth and wailing about a “failure to connect the dots.”

All this is not to say that Poole’s plan should not be adopted. If he can convince an airline of its merits, and the airline can convince its shareholders, insurers, airports, and their customers, they should implement the program to their heart’s content. They should reap the economic gain, too, when they prove that they have found a way to better serve the public’s safety, convenience, privacy, and transportation needs.

It is the TSA that should not implement this program. Along with its significant security defects, it would create a program that the government might use to control access to other goods, services, and infrastructure throughout society. The TSA would migrate toward conditioning all travel on having a government-issued biometric identity card. Fundamentally, the government should not be making these decisions or operating airline security systems.

A very interesting paper surfaced by recent public attention to this issue predicts that annual highway deaths will increase (from an already significant number) by between 11 and 275 because of people’s avoidance of privacy-invasive airport procedures. But what caught my eye in it were the following numbers:

During the past decade, terrorist attacks, with respect to air travel in the United States, have occurred three times involving six aircraft. Four planes were hijacked on 9/11, the shoe bomber incident occurred in December 2001, and, most recently, the Christmas Day underwear bomber attempted an attack in 2009. In that same span of time, over 99 million planes took off and landed within the United States, carrying over 7 billion passengers.

Especially because 9/11’s “commandeering” attack on air travel has been essentially foreclosed by hardened cockpit doors and passenger/crew awareness, these numbers suggest the smallness of the chance that someone can elude worldwide investigatory pressure, prepare an explosive and detonator that actually work, smuggle both through conventional security, and successfully use them to take down a plane. It hasn’t happened in nearly 100 million flights.

This is not an argument to “let up” on security or to stop searching for measures that will cost-effectively drive the chance of attacker success even closer to zero.  But more thorough risk management analysis than mine or Bob Poole’s would probably show that accepting the above risk is preferable to either delaying and invading the bodily privacy of travelers or creating a biometric identity and background-check system.

“Risk of Accidents Ameliorated!” Doesn’t Sell Papers

What a headline on the Washington Examiner today! It’s a good illustration of the propensity of media to overplay terrorism.

“Terror threat to city water,” the headline blares in large type. “Chlorine changed to protect D.C., Va. supply.”

The actual story is about the Army Corps of Engineers’ switch from chlorine gas to a liquid form of chlorine called sodium hypochlorite. Gaseous chlorine is relatively more dangerous and difficult to contain if it’s released, so the change is a prudent safety step.

It has as much to do with protecting against accidental release as any terror threat. And an accidental release is not a threat to the water supply; it’s a threat to people near the facilities or transportation corridors where chlorine gas could be released.

The idea of terrorism may have gotten the Corps moving forward, but nothing in the story says there was any specific threat by anyone to attack the D.C. water treatment infrastructure.

This is a story about risks being ameliorated, and it’s pretty boring—except for the headline!!