Tag: cybersecurity

Ryan Radia Debates CISPA

I’m impressed with the job Ryan Radia of the Competitive Enterprise Institute did in this Federalist Society podcast/debate about “CISPA,” the Cyber Intelligence Sharing and Protection Act.

It’s also notable how his opponent Stewart Baker veers into a strange ad hominem against “privacy groups” in his rejoinder to Radia. Baker speaks as though arguable overbreadth in privacy statutes written years ago makes it appropriate to scythe down all law that might affect information sharing for cybersecurity purposes. That’s what language like “[n]otwithstanding any other provision of law” would do, and it’s in the current version of the bill three times.

On Breach of Decorum and Government Growth

Last week, the Center for Democracy and Technology changed its position on CISPA, the Cyber Intelligence Sharing and Protection Act, two times in short succession, easing the way for House passage of a bill profoundly threatening to privacy.

Declan McCullagh of C|Net wrote a story about it called “Advocacy Group Flip-Flops Twice Over CISPA Surveillance Bill.” In it, he quoted me saying: “A lot of people in Washington, D.C. think that working with CDT means working for good values like privacy. But CDT’s number one goal is having a seat at the table. And CDT will negotiate away privacy toward that end.”

That comment netted some interesting reactions. Some were gleeful about this “emperor-has-no-clothes” moment for CDT. To others, I was inappropriately “insulting” to the good people at CDT. This makes the whole thing worthy of further exploration. How could I say something mean like that about an organization whose staff spend so much time working in good faith on improving privacy protections? Some folks there absolutely do. This does not overcome the institutional role CDT often plays, which I have not found so creditable. (More on that below. Far below…)

First, though, let me illustrate how CDT helped smooth the way for passage of the bill:

Congress is nothing if not ignorant about cybersecurity. It has no idea what to do about the myriad problems that exist in securing computers, networks, and data. So its leaders have fixed on “information sharing” as a panacea.

Because the nature and scope of the problems are unknown, the laws that stand in the way of relevant information sharing are unknown. The solution? Scythe down as much law as possible. (What’s actually needed, most likely, is a narrow amendment to ECPA. Nothing of the sort is yet in the offing.) But this creates a privacy problem: an “information sharing” bill could facilitate promiscuous sharing of personal information with government agencies, including the NSA.

On the House floor last week, the leading Republican sponsor of CISPA, Mike Rogers (R-MI), spoke endlessly about privacy and civil liberties, the negotiations, and the process he had undertaken to try to resolve problems in the privacy area. At the close of debate on the rule that would govern debate on the bill, he said:

The amendments that are following here are months of negotiation and work with many organizations—privacy groups. We have worked language with the Center for Democracy and Technology, and they just the other day said they applauded our progress on where we’re going with privacy and civil liberties. So we have included a lot of folks.

You see, just days before, CDT had issued a blog post saying that it would “not oppose the process moving forward in the House.” The full text of that sentence is actually quite precious because it shows how little CDT got in exchange for publicly withdrawing opposition to the bill. Along with citing “good progress,” CDT president and CEO Leslie Harris wrote:

Recognizing the importance of the cybersecurity issue, in deference to the good faith efforts made by Chairman Rogers and Ranking Member Ruppersberger, and on the understanding that amendments will be considered by the House to address our concerns, we will not oppose the process moving forward in the House.

Cybersecurity is an important issue—never mind whether the bill would actually help with it. The leadership of the House Intelligence Committee have acted in good faith. And amendments will evidently be forthcoming in the House. So go ahead and pass a bill not ready to become law, in light of “good progress.”

Then CDT got spun.

As McCullagh tells it:

The bill’s authors seized on CDT’s statement to argue that the anti-CISPA coalition was fragmenting, with an aide to House Intelligence Committee Chairman Mike Rogers (R-Mich.) sending reporters e-mail this morning, recalled a few minutes later, proclaiming: “CDT Drops Opposition to CISPA as Bill Moves to House Floor.” And the Information Technology Industry Council, which is unabashedly pro-CISPA, said it “applauds” the “agreement between CISPA sponsors and CDT.”

CDT quickly reversed itself, but the damage was done. Chairman Rogers could make an accurate but misleading floor statement omitting the fact that CDT had again reversed itself. This signaled to members of Congress and their staffs—who don’t pay close attention to subtle shifts in the views of organizations like CDT—that the privacy issues were under control. They could vote for CISPA without getting privacy blow-back. Despite furious efforts by groups like the Electronic Frontier Foundation and the ACLU, the bill passed 248 to 168.

Defenders of CDT will point out—accurately—that it argued laboriously for improvements to the bill. And with the bill’s passage inevitable, that was an essential benefit to the privacy side.

Well, yes and no. To get at that question, let’s talk about how groups represent the public’s interests in Washington, D.C. We’ll design a simplified representation game with the following cast of characters:

  • one powerful legislator, antagonistic to privacy, whose name is “S.R. Veillance”;
  • twenty privacy advocacy groups (Groups A through T); and
  • 20,000 people who rely on these advocacy groups to protect their privacy interests.

At the outset, the 20,000 people divide their privacy “chits”—that is, their donations and their willingness to act politically—equally among the groups. Based on their perceptions of the groups’ actions and relevance, the people re-assign their chits each legislative session.

Mr. Veillance has an anti-privacy bill he would like to get passed, but he knows it will meet resistance if he doesn’t get 2,500 privacy chits to signal that his bill isn’t that bad. If none of the groups give him any privacy chits, his legislation will not pass, so Mr. Veillance goes from group to group bargaining in good faith and signaling that he intends to do all he can to pass his bill. He will reward the groups that work with him by including such groups in future negotiations on future bills. He will penalize the groups that do not by excluding them from future negotiations.

What we have is a game somewhat like the prisoner’s dilemma in game theory. Though it is in the best interest of society overall for the groups to cooperate and hold the line against a bill, individual groups can advantage themselves by “defecting” from the interests of all. These defectors will be at the table the next time an anti-privacy bill is negotiated.

Three groups—let’s say Group C, Group D, and Group T—defect from the pack. They make deals with Mr. Veillance to improve his bill, and in exchange they give him their privacy chits. He uses their 3,000 chits to signal to his colleagues that they can vote for the bill without fear of privacy-based repercussions.

At the end of the first round, Mr. Veillance has passed his anti-privacy legislation (though weakened, from his perspective). Groups C, D, and T did improve the bill, making it less privacy-invasive than it otherwise would have been, and they have also positioned themselves to be more relevant to future privacy debates because they will have a seat at the table. Hindsight makes the passage of the bill look inevitable, and CDT looks all the wiser for working with Sir Veillance while others futilely opposed the bill.

Thus, having defected, CDT is now able to get more of people’s privacy chits during the next legislative session, so it has more bargaining power and money than other privacy groups. That bargaining power is relevant, though, only if Mr. Veillance moves more bills in the future. To maintain its bargaining power and income, it is in CDT’s interest to see that legislation passes regularly. If anti-privacy legislation never passes, CDT’s unique role as a negotiator will not be valued and its ability to gather chits will diminish over time.
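The dynamics of this game can be sketched in a few lines of code. This is a minimal illustration, not a model of any real organization; the 20-percent “seat-at-the-table” bonus for defectors and the fixed chit pool are assumptions made up for the sketch.

```python
# Minimal sketch of the "representation game" described above.
# All numbers here (the 20% chit bonus for defectors, the fixed
# chit pool) are illustrative assumptions.

def run_session(chits, defectors, threshold=2500):
    """One legislative session: defecting groups hand their chits to
    Mr. Veillance; his bill passes if he collects enough of them."""
    collected = sum(chits[g] for g in defectors)
    passed = collected >= threshold
    if passed:
        # The public rewards groups seen at the table with a larger
        # share of chits next session (an assumed 20% boost) ...
        total = sum(chits.values())
        for g in defectors:
            chits[g] *= 1.2
        # ... renormalized so the total pool of chits stays constant.
        scale = total / sum(chits.values())
        for g in chits:
            chits[g] *= scale
    return passed

groups = [chr(ord("A") + i) for i in range(20)]  # Groups A through T
chits = {g: 1000.0 for g in groups}              # 20,000 chits, split evenly

passed = run_session(chits, defectors={"C", "D", "T"})
print(passed)                    # True: 3,000 chits clears the 2,500 bar
print(chits["C"] > chits["A"])   # True: defectors gained bargaining power
```

Run repeatedly, the gap compounds: the defectors’ growing chit share is valuable only so long as new bills keep moving, which is the incentive problem the game is meant to expose.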

CDT plays a role in “improving” individual pieces of legislation to make them less privacy-invasive and it helps to ensure that improved—yet still privacy-invasive—legislation passes. Over the long run, to keep its seat at the table, CDT bargains away privacy.

This highly simplified representation game repeats itself across many issue-dimensions in every bill, and it involves many more, highly varied actors using widely differing influence “chits.” The power exchanges and signaling among parties end up looking like a kaleidoscope rather than the linear story of an organization subtly putting its own goals ahead of the public interest.

Most people working in Washington, D.C., and almost assuredly everyone at CDT, have no awareness that they live under the collective action problem illustrated by this game. This is why government grows and privacy recedes.

In his article, McCullagh cites CDT founder Jerry Berman’s role in the 1994 passage of CALEA, the Communications Assistance for Law Enforcement Act. I took particular interest in CDT’s 2009 backing of the REAL ID revival bill, PASS ID. In 2006, CDT’s Jim Dempsey helped give privacy cover to the use of RFID in identification documents contrary to the principle that RFID is for products, not people. A comprehensive study of CDT’s institutional behavior to confirm or deny my theory of its behavior would be very complex and time-consuming.

But divide and conquer works well. My experience is that CDT is routinely the first defector from the privacy coalition despite the earnest good intentions of many individual CDTers. And it’s why I say, perhaps in breach of decorum, things like: “A lot of people in Washington, D.C. think that working with CDT means working for good values like privacy. But CDT’s number one goal is having a seat at the table. And CDT will negotiate away privacy toward that end.”

Cybersecurity Bills? No, Thanks

Prominent academics, experienced engineers, and professionals published an open letter to Congress yesterday, stating their opposition to CISPA and other overly broad cybersecurity bills. Highlight:

We take security very seriously, but we fervently believe that strong computer and network security does not require Internet users to sacrifice their privacy and civil liberties. The bills currently under consideration, including Rep. Rogers’ Cyber Intelligence Sharing and Protection Act of 2011 (H.R. 3523) and Sen. McCain’s SECURE IT Act (S. 2151), are drafted to allow entities who participate in relaying or receiving Internet traffic to freely monitor and redistribute those network communications. The bills nullify current legal protections against wiretapping and similar civil liberties violations for that kind of broad data sharing. By encouraging the transfer of users’ private communications to US Federal agencies, and lacking good public accountability or transparency, these “cybersecurity” bills unnecessarily trade our civil liberties for the promise of improved network security.

Cato’s recent Capitol Hill briefing on cybersecurity covered many similar points, and additional ones, too. CISPA and three other bills are scheduled for consideration on the House floor this week.

Cybersecurity: Talking Points vs. Substance

In the late stages of a legislative battle, it often comes down to “talking points.” Whoever puts out the message that sticks wins the debate—damn the substance.

Rep. Mike Rogers (R-MI) is prioritizing talking points over substance if a CQ report about a speech he gave to the Ripon Society is accurate. (He put it up on his Web site, from which one could infer endorsement. Rogers is not a cosponsor of SOPA, the Stop Online Piracy Act, so let’s not have the government taking down the house.gov domain just now, mkay?)

From the report:

“We’re finding language we can agree on,” he said in a speech to the Ripon Society, a moderate Republican group. “Are we going to agree on everything? Probably not. They don’t want anything, anytime, ever.” But, Rogers said, he hopes to give the groups “language that at least allows them to sleep at night, because I can’t sleep at night over these threats.”

This seems to suggest that a few tweaks to language, well in the works with the privacy community, will make his version of cybersecurity legislation a fait accompli. I’m a keen observer of the privacy groups, and I see no evidence that this is so. The bill is so broadly written that it is probably irreparable.

And that is a product of Congress’s approach to this problem: Congress does not know how to address the thousands of different problems that fall under the umbrella term “cybersecurity,” so it has fixed on promiscuous (and legally immunized) “information sharing” with government security agencies as the “solution.” Privacy can rightly be traded for other goods such as security, but with no benefits discernible from wanton information sharing, one shouldn’t expect sign-off from the privacy community.

Withholding sign-off entirely is not actually the message of the privacy community, though, whose members, on average, trust the government more than most conservatives and libertarians do. The mainstream privacy community probably would accept highly regulatory and poorly formed cybersecurity legislation if it had enough privacy protections. But Rogers’ talking points try to push privacy folk onto the “unreasonable” part of the chessboard, saying, “They don’t want anything, anytime, ever.”

That’s closer to my view than anything the orthodox privacy advocates are saying. Cybersecurity is not an area where the federal government can do much to help. But even I said in my 2009 testimony to the House Science Committee that the federal government has a role in improving cybersecurity: being a smart consumer that influences technology markets for the better.

What Representative Rogers—and all advocates for cybersecurity legislation—have failed to do is to make the affirmative case for their bills. “I can’t sleep at night” is not an answer to the case, carefully made by Jerry Brito of the Mercatus Center at Cato’s recent Hill briefing, that the threat from cyberattacks is overblown.

The briefing was called “Cybersecurity: Will Federal Regulation Help?” That’s a place one can go for substance.

From Cybercrime Statistics to Cyberspying

Someone finally decided to examine “cybercrime” statistics, and here’s what they found:

The cybercrime surveys we have examined exhibit [a] pattern of enormous, unverified outliers dominating the data. In some, 90 percent of the estimate appears to come from the answers of one or two individuals. In a 2006 survey of identity theft by the Federal Trade Commission, two respondents gave answers that would have added $37 billion to the estimate, dwarfing that of all other respondents combined. This is not simply a failure to achieve perfection or a matter of a few percentage points; it is the rule, rather than the exception. Among dozens of surveys, from security vendors, industry analysts and government agencies, we have not found one that appears free of this upward bias.

That’s Dinei Florêncio and Cormac Herley of Microsoft Research in a New York Times piece entitled: “The Cybercrime Wave That Wasn’t.”

You see, cybercrime statistics have been generated using surveys of individuals and businesses, but you can’t generate valid loss figures that way. An opinion poll works because its errors naturally cancel out: there are a roughly equal number of wrongly stated “thumbs-up”s and “thumbs-down”s.

When you ask people to estimate losses, though, they can never estimate less than zero, so errors will always push results to the high side. High-side errors extrapolated society-wide drive the perception that cybercrime is out of control.
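This asymmetry is easy to demonstrate numerically. The sketch below uses made-up parameters (a 50/50 poll with a 5 percent response-error rate, and a $100 true average loss with noisy, zero-floored estimates) to show symmetric poll errors canceling while loss-survey errors pile up on the high side.

```python
# Illustrative simulation of the two survey regimes: symmetric yes/no
# errors roughly cancel, while loss estimates floored at zero can only
# err upward. All parameters are assumptions for the demonstration.
import random

random.seed(1)
n = 10_000

# Opinion poll: true split is 50/50, and 5% of respondents in each
# camp mistakenly report the other answer.
true_yes = n // 2
flips_to_no = sum(random.random() < 0.05 for _ in range(true_yes))
flips_to_yes = sum(random.random() < 0.05 for _ in range(n - true_yes))
measured_yes = true_yes - flips_to_no + flips_to_yes
print(measured_yes)          # stays close to the true 5,000

# Loss survey: every respondent's true loss is $100, but estimates are
# noisy and can never go below zero, so noise only inflates the mean.
reported = [max(0.0, random.gauss(100.0, 400.0)) for _ in range(n)]
mean_reported = sum(reported) / n
print(round(mean_reported))  # well above the true $100 average
```

The zero floor substantially inflates the apparent average loss; add a couple of enormous outliers of the kind the Microsoft researchers found, and the extrapolated total becomes almost meaningless.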

There are more drivers of excess insecurity than just bad loss estimates. There are also data breach notification laws, which require data holders to report various kinds of personal data spillage. These reports are the high-tech, grown-up version of a favorite schoolyard taunt: “Your epidermis is showing!” Epidermis is, of course, a scientific name for skin. It often doesn’t matter that one’s epidermis is showing. The questions are: What part of the epidermis? And what social or economic consequences does it have?

Most breached data is put to no use whatsoever. A 2005 study of data breaches found the highest fraudulent misuse rate for all breaches under examination to be 0.098 percent—less than one in 1,000 identities. (The Government Accountability Office concurs that misuse of breached data is rare.) Larger breaches tend to have lower misuse rates, which makes popular reporting on gross numbers of personal data breaches misleading. Identity frauds are limited by the time and difficulty of executing them, not by access to data.
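A bit of back-of-the-envelope arithmetic shows why gross record counts mislead. The breach sizes below are hypothetical; the point is that the number of identities a fraudster can actually exploit is capped by the time and difficulty of each fraud, not by the size of the breach.

```python
# Hypothetical breach sizes; holding the number of exploited identities
# fixed reflects the claim that fraud is limited by execution time and
# difficulty, not by access to data.
small_breach = 10_000       # records exposed
large_breach = 10_000_000

misused = 10                # identities actually exploited, either way

print(misused / small_breach)  # 0.001 (0.1%, near the observed ceiling)
print(misused / large_breach)  # 1e-06 (a vanishingly small rate)
```

Counting “records breached” thus overstates harm most for exactly the breaches that generate the biggest headlines.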

Why does excess cyber-insecurity matter? Doesn’t it beneficially drive companies to adopt better security practices for personal data?

It undoubtedly does, but security is not costless, and money driven to data security measures comes from other uses that might do more to make consumers better off. More importantly, though, data breach agitation and distended crime statistics have joined with other cybersecurity hype to generate a commitment in Congress to pass cybersecurity legislation.

Cybersecurity bills pending in both the House and Senate could have gruesome consequences for privacy because of “information sharing” provisions that immunize companies sharing data with the government for cybersecurity purposes. The potential for a huge, lawless cyberspying operation is significant if anyone can feed data to the government free of liability, including the privacy protections in property law, torts, and contract. Congress would not improve things by regulating in the name of cybersecurity, and it just might make things a lot worse.

It is ironic that overwrought claims about cybercrime and data breach could be privacy’s undoing, but they just might be.

Cybersecurity Hype

The approving response of an IT security professional last week pointed me to a story about cybersecurity in which I’m featured. The story and accompanying video are called: “Is Cyberwar Hype Fuelling a Cybersecurity-Industrial Complex?” It’s a really good look at how government contractors, many of them former government officials, are working Washington to generate an issue.

How rare is it that a cybersecurity news report includes even a word of doubt about the nature and scope of the threat? How rare is it that any news report includes a word of doubt about the nature and scope of threats?

My correspondent, who works at a public utility in IT security, said some things that are fascinating and important.

We are being asked to do things that have no practical risk reduction value purely for the perceived benefit. It takes no effort to say that the cyber world is about to end yet it takes tremendous effort to continually demonstrate that we are prepared for anything.

In other words, operators of so-called “critical infrastructure” are already wasting effort on things that look like improved security because they’re in the position of proving that nothing could ever go wrong. This is because cybersecurity fear-mongers are spinning apocalyptic tales. Imagine what it will be like when varied government bureaucracies are calling on the private sector to prove they are implementing endlessly varying, imagination-based federal cybersecurity dictates.

Now, a few caveats are in order: Cybersecurity is a real problem, and there are many challenges presented to all organs of society in securing computers, networks, and data. I’m quoted in the story saying there is “no chance whatsoever” that nuclear power plants and electric infrastructure would be hacked and taken down for any significant period of time. The more accurate phrasing would have been that the chance is “exceedingly small.” The point remains that these problems have nothing like the scale or significance of war or terrorism (except to the extent that terrorism is also an important but entirely manageable problem).

In the event of some future, modest-consequence event, I fully expect to be called out as having been a Panglossian cybersecurity naysayer. (It’s a tactic one would expect from advocates who misstate basic math to hype threats.) Not so. I expect some bad things to occur. I don’t believe that centralizing our country’s cybersecurity efforts with the federal government would position us better to prevent them or respond to them.

Soviet-Style Cybersecurity Regulation

Reading over the cybersecurity legislative package recently introduced in the Senate is like reading a Soviet planning document. One of its fundamental flaws, if passed, would be its centralizing and deadening effect on society’s responses to the many and varied problems that are poorly captured by the word “cybersecurity.”

But I’m most struck by how, at every turn, this bill strains to release cybersecurity regulators—and their regulated entities—from the bonds of law. The Department of Homeland Security could commandeer private infrastructure into its regulatory regime simply by naming it “covered critical infrastructure.” DHS and a panel of courtesan institutes and councils would develop the regulatory regime outside of ordinary administrative processes. And—worst, perhaps—regulated entities would be insulated from ordinary legal liability if they were in compliance with government dictates. Regulatory compliance could start to usurp protection of the public as a corporate priority.

The bill retains privacy-threatening information-sharing language that I critiqued in no uncertain terms last week (Title VII), though the language has changed. (I have yet to analyze what effect those changes have.)

The news for Kremlin-watchers (er, Beltway-watchers), of course, is that the Department of Homeland Security has won the upper hand in the turf battle. (That’s the upshot of Title III of the bill.) It’s been a clever gambit of Washington’s to make the debate about which agency should handle cybersecurity, rather than asking what the government’s role is and what it can actually contribute. Is it a small consolation that it’s a civilian security agency that gets to oversee Internet security for us, and not the military? None-of-the-above would have been the best choice of all.

Ah, but the government has access to secret information that nobody else does, doesn’t it? Don’t be so sure. Secrecy is a claim to authority that I reject. Many swoon to secrecy, assuming the government has 1) special information that is 2) actually helpful. I interpret secrecy as a failure to put facts into evidence. My assumption is the one consistent with accountable government and constitutional liberty. But we’re doing Soviet-style cybersecurity here, so let’s proceed.

Title I is the part of the bill that Sovietizes cybersecurity. It brings a welter of government agencies, boards, and institutes together with private-sector owners of government-deemed “critical infrastructure” to do sector-by-sector “cyber risk assessments” and to produce “cybersecurity performance requirements.” Companies would be penalized if they failed to certify to the government annually that they have “developed and effectively implemented security measures sufficient to satisfy the risk-based security performance requirements.” Twenty-first century paperwork violations. But in exchange, critical infrastructure owners would be insulated from liability (sec. 105(e))—a neat corporatist trade-off.

How poorly tuned these security-by-committee processes are. In just 90 days, the bill requires a “top-level assessment” of “cybersecurity threats, vulnerabilities, risks, and probability of a catastrophic incident across all critical infrastructure sectors” in order to guide the allocation of resources. That’s going to produce a risk assessment with all the quality of a student term paper written overnight.

Though central planning is not the way to do cybersecurity at all, a serious risk assessment would take at least a year, and the bill should explicitly treat it as a “final agency action” for purposes of judicial review under the Administrative Procedure Act. The likelihood of court review and reversal is the only thing that might cause this risk assessment to actually use a sound methodology. As it is, watch for it to be a political document that rehashes tired cyberslogans and anecdotes.

The same administrative rigor should be applied to other regulatory actions created by the bill, such as designations of “covered critical infrastructure.” Amazingly, the bill requires no administrative law regularity (i.e., notice-and-comment rulemaking, agency methodology and decisions subject to court review) when the government designates private businesses as “covered critical infrastructure” (sec. 103), but it does require administrative niceties if an owner of private infrastructure wants to contest those decisions (sec. 103(c)). In other words, the government can commandeer private businesses at whim. Getting your business out of the government’s maw will require leaden processes.

Hopefully, our courts will recognize that a “final agency action” has occurred at least when the Department of Homeland Security subjects privately owned infrastructure to special regulation, if not when it devises whatever plan or methodology to do so.

The same administrative defects exist in the section establishing “risk-based cybersecurity performance requirements.” The bill calls for the DHS and its courtesans to come up with these regulations without reference to administrative process (sec. 104). That’s what they are, though: regulations. Calling them “performance requirements” doesn’t make a damn bit of difference. Only when it comes time to apply these regulatory requirements to regulated entities (sec. 105) would the DHS “promulgate regulations.”

I can’t know what the authors of the bill are trying to achieve by bifurcating the content of the regulations from their application to the private sector, but it seems intended to insulate the regulations’ content from administrative procedures. It’s like the government saying that the menu is going to be made up outside of law—just the force-feeding is subject to administrative procedure. Hopefully, that won’t wash in the courts either.

This matters not only because the rule of law is an important abstraction. Methodical risk analysis and methodical application of the law will tend to limit what things are deemed “covered critical infrastructure” and what the regulations on that infrastructure are. It will limit the number of things that fall within the privacy-threatening information-sharing portion of the bill, too.

Outside of regular order, cybersecurity will tend to be flailing, spasmodic, political, and threatening to privacy and liberty. We should not want a system of Soviet-style regulatory dictates for that reason—and because it is unlikely to produce better cybersecurity.

The better systems for discovering and responding to cybersecurity risks are already in place. One is the system of profit and loss that companies enjoy or suffer when they succeed or fail to secure their assets. Another is common law liability, where failure to prevent harms to others produces legal liability and damage awards.

The resistance to regular legal processes in this bill is part and parcel of the stampede to regulate in the name of cybersecurity. It’s a move toward centralized regulatory command-and-control over large swaths of the economy through “cybersecurity.”