Topic: Telecom, Internet & Information Policy

What NSA Director Mike Rogers Doesn’t Get About Encryption

At a New America Foundation conference on cybersecurity Monday, NSA Director Mike Rogers gave an interview that—despite his best efforts to deal exclusively in uninformative platitudes—did produce a few lively moments. The most interesting of these came when techies in the audience—security guru Bruce Schneier and Yahoo’s chief information security officer Alex Stamos—challenged Rogers’ endorsement of a “legal framework” for requiring device manufacturers and telecommunications service providers to give the government backdoor access to their users’ encrypted communications. (Rogers repeatedly objected to the term “backdoor” on the grounds that it “sounds shady”—but that is quite clearly the correct technical term for what he’s seeking.) Rogers’ exchange with Stamos, transcribed by John Reed of Just Security, is particularly illuminating:

Alex Stamos (AS): “Thank you, Admiral. My name is Alex Stamos, I’m the CISO for Yahoo!. … So it sounds like you agree with Director Comey that we should be building defects into the encryption in our products so that the US government can decrypt…

Mike Rogers (MR): That would be your characterization. [laughing]

AS: No, I think Bruce Schneier and Ed Felten and all of the best public cryptographers in the world would agree that you can’t really build backdoors in crypto. That it’s like drilling a hole in the windshield.

MR: I’ve got a lot of world-class cryptographers at the National Security Agency.

AS: I’ve talked to some of those folks and some of them agree too, but…

MR: Oh, we agree that we don’t accept each other’s premise. [laughing]

AS: We’ll agree to disagree on that. So, if we’re going to build defects/backdoors or golden master keys for the US government, do you believe we should do so — we have about 1.3 billion users around the world — should we do so for the Chinese government, the Russian government, the Saudi Arabian government, the Israeli government, the French government? Which of those countries should we give backdoors to?

MR: So, I’m not gonna… I mean, the way you framed the question isn’t designed to elicit a response.

AS: Well, do you believe we should build backdoors for other countries?

MR: My position is — hey look, I think that we’re lying that this isn’t technically feasible. Now, it needs to be done within a framework. I’m the first to acknowledge that. You don’t want the FBI and you don’t want the NSA unilaterally deciding, so, what are we going to access and what are we not going to access? That shouldn’t be for us. I just believe that this is achievable. We’ll have to work our way through it. And I’m the first to acknowledge there are international implications. I think we can work our way through this.

AS: So you do believe then, that we should build those for other countries if they pass laws?

MR: I think we can work our way through this.

AS: I’m sure the Chinese and Russians are going to have the same opinion.

MR: I said I think we can work through this.

I’ve written previously about why backdoor mandates are a horrible, horrible idea—and in his question, Stamos hits on some of the reasons I’ve pointed to. What’s most obviously disturbing here is that the head of the NSA didn’t seem to have even a bad response prepared for such an obvious objection—he had no serious response at all. China and Russia may not be able to force American firms like Google and Apple to redesign their products to be more spy-friendly, but if the American government does their dirty work for them with some form of legal backdoor mandate, those firms will be hard pressed to resist demands from repressive regimes to hand over the keys. Rogers’ unreflective response seems like a symptom of what a senior intelligence official once described to me as the “tyranny of the inbox”: a mindset so myopically focused on solving one’s own immediate practical problems that the bigger picture—the dangerous long-term consequences of the easiest or most obvious quick-fix solution—is barely considered.

How the NSA Stole the Keys to Your Phone

A blockbuster story at The Intercept Thursday revealed that a joint team of hackers from the National Security Agency and its British counterpart, the Government Communications Headquarters (GCHQ), broke into the systems of one of the world’s largest manufacturers of cell phone SIM cards to steal the encryption keys that secure wireless communications for hundreds of mobile carriers—including companies like AT&T, T-Mobile, Verizon, and Sprint. To effect the heist, the agencies targeted employees of the Dutch company Gemalto, scouring e-mails and Facebook messages for information that would enable them to compromise the SIM manufacturer’s networks and make surreptitious copies of the keys before they were transmitted to the carriers. Many aspects of this ought to be extremely disturbing.

First, this is a concrete reminder that, as former NSA director Michael Hayden recently acknowledged, intelligence agencies don’t spy on “bad people”; they spy on “interesting people.” In this case, they spied extensively on law-abiding technicians employed by a law-abiding foreign corporation, then hacked that corporation in apparent violation of Dutch law. We know this was hardly a unique case—one NSA hacker boasted about “hunting sysadmins” in Snowden documents disclosed nearly a year ago—but it seems particularly poetic coming on the heels of the recent Sony hack, properly condemned by the U.S. government. Dutch legislators quoted in the story are outraged, as well they should be. Peaceful private citizens and companies in allied nations, engaged in no wrongdoing, should not have to worry that the United States is trying to break into their computers.

Second, indiscriminate theft of mobile encryption keys bypasses one of the few checks on government surveillance by enabling wiretaps without the assistance of mobile carriers. Under the typical model for wiretaps, a government presents the carrier with some form of legal process specifying which accounts or lines are targeted for surveillance, and the company then provides those communications to the government. As the European telecom Vodafone disclosed last summer, however, some governments insist on being granted “direct access” to the stream of communications so that they can conduct their wiretaps without going through the carrier. The latter architecture, of course, is far more susceptible to abuse, because it removes the only truly independent, nongovernmental layer of review from the collection process. A spy agency that wished to abuse its power under the former model—by conducting wiretaps without legal authority or inventing pretexts to target political opponents—would at least have to worry that lawyers or technicians at the telecommunications provider might detect something amiss. But any entity armed with mobile encryption keys effectively enjoys direct access: it can vacuum cellular signals out of the air and listen to any or all of the calls it intercepts, subject only to internal checks or safeguards.
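To see why key theft amounts to direct access, it helps to recall how cellular encryption works: the handset and the network each derive a per-call session key from the SIM’s long-term secret key and a random challenge the network broadcasts in the clear. The sketch below is a deliberate simplification—real GSM networks use COMP128-family key derivation and A5-family stream ciphers, not the HMAC and hash-chain stand-ins here—but the structure of the problem is the same: anyone holding the SIM’s root key can derive the session key from intercepted radio traffic and decrypt at will.

```python
import hashlib
import hmac
import os

# Simplified stand-ins: real GSM uses COMP128-family algorithms (A3/A8) for
# key derivation and A5-family stream ciphers, not HMAC/SHA-256.

def derive_session_key(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A8: session key = f(SIM root key Ki, network challenge RAND)."""
    return hmac.new(ki, rand, hashlib.sha256).digest()

def stream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher standing in for A5; encrypting and decrypting are the same operation."""
    stream, block = b"", key
    while len(stream) < len(data):
        block = hashlib.sha256(block).digest()
        stream += block
    return bytes(a ^ b for a, b in zip(data, stream))

# The carrier and the handset share Ki, burned into the SIM at manufacture.
ki = os.urandom(16)

# The network sends a random challenge over the air IN THE CLEAR;
# both sides derive the same session key from Ki and that challenge.
rand = os.urandom(16)
session_key = derive_session_key(ki, rand)
ciphertext = stream_cipher(session_key, b"this call is 'encrypted'")

# An eavesdropper who stole Ki needs no help from the carrier: having
# captured RAND off the air, it derives the same session key and decrypts.
attacker_key = derive_session_key(ki, rand)
assert stream_cipher(attacker_key, ciphertext) == b"this call is 'encrypted'"
```

The point of the sketch is the last two lines: decryption requires nothing from the carrier and triggers no review by anyone outside the spy agency—only the stolen Ki and a radio.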

There are, to be sure, times when going to the target’s carrier with legal process is not a viable option—because the carrier is outside the jurisdiction of the United States or our allies. Stealing phone keys in bulk is certainly a much easier solution to that problem than crafting interception strategies tailored to the specific target or the specific uncooperative foreign carrier. Unfortunately, the most convenient solution in this case is also one that gives the United States (or at least its intelligence community) a vested interest in the systematic insecurity of global communications infrastructure. We hear a great deal lately about the value of information sharing in cybersecurity: well, here’s a case where the NSA had information that the technology American citizens and companies rely on to protect their communications was not only vulnerable, but had in fact been compromised. The agency’s mission is supposed to include helping us secure our communications networks—but having chosen the easy solution to the problem of conducting cellular wiretaps, its institutional incentives run in just the opposite direction.

Finally, this is one more demonstration that proposals to require telecommunications providers and device manufacturers to build law enforcement backdoors in their products are a terrible, terrible idea. As security experts have rightly insisted all along, requiring companies to keep a repository of keys to unlock those backdoors makes the key repository itself a prime target for the most sophisticated attackers—like NSA and GCHQ. It would be both arrogant and foolhardy in the extreme to suppose that only “good” attackers will be successful in these efforts. 

Congress’s Blank-Check Bills

Luke Rosiak at the Washington Examiner filed a report late last week on a little-recognized but important congressional practice: proposing open-ended spending. In the last Congress, fully 700 bills proposed spending without limits. That’s a lot.

A quick primer: congressional spending is a two-step process. First, there must be an authorization of appropriations. Then Congress appropriates funds, providing actual authority for executive branch agencies to spend.

The committees in Congress are divided by type between authorizing committees and appropriations committees. Authorizers are supposed to do the bulk of the oversight and authorize spending at amounts they determine. Appropriators would then dole out funds specifically. But over the years, the division of labor has shifted and power has collected in the appropriations committees, whose members are often referred to as “cardinals,” as in the College of Cardinals.

Backward incentives explain this. Members of Congress who authorize spending naturally appear to be pro-spending, which has political costs. The costs are at their worst when a specific amount is involved. “Senator So-and-So wants to spend $50 million on what?!” So, many authorizing committees shirk their duties by eschewing reauthorization of the agencies in their jurisdiction. And sometimes the trick is authorizing spending of “such sums as may be necessary,” which doesn’t provide as good an angle for political attack.

[Chart: Representatives who wrote the most blank checks]

That would make appropriators the only drag on spending, but it doesn’t, because of a second perversion in politics. Appropriators get good enough at gathering the political emoluments of spending that they overcome the negatives and become an institutional pro-spending bloc. As Mike Franc of the Heritage Foundation put it in 2011, “appropriators, their professional staff, and legions of lobbyists serve as a mutually reinforcing triad bent on increasing spending today, tomorrow, and forevermore.”

Rosiak notes that the House Republican leadership cautioned against open-ended spending proposals at the beginning of the 113th Congress. Consequently, Republican blank-check bills are rarer. The top open-ended spenders are all Democrats, and they’re all on the party’s left wing.

So what’s to be done?

In 2010, the Senate joined the House in banning earmarks. This came after a few short years of applied transparency in the earmark area, including a contest to gather earmark data conducted by yours truly on WashingtonWatch.com. A group called Taxpayers Against Earmarks (now Ending Spending) applied some direct pressure. And a host of other groups were involved, of course.

The practice of proposing open-ended spending could similarly be curtailed with public oversight and pressure.

So who should do that work?

We’ve already started. Rosiak’s story was produced using the Cato Institute’s Deepbills data.
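For the curious, here is a hedged sketch of the kind of scan such a story implies: flag bills whose text authorizes “such sums as may be necessary.” The directory layout and file format below are hypothetical stand-ins, not the actual Deepbills schema.

```python
import pathlib
import re

# The phrase Congress uses to authorize open-ended ("blank check") spending.
BLANK_CHECK = re.compile(r"such sums as (?:may be|are) necessary", re.IGNORECASE)

def find_blank_check_bills(bill_dir: str) -> list[str]:
    """Return the filenames of bills containing open-ended authorizations."""
    hits = []
    for path in pathlib.Path(bill_dir).glob("*.xml"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if BLANK_CHECK.search(text):
            hits.append(path.name)
    return sorted(hits)

if __name__ == "__main__":
    # "bills/113th" is a hypothetical local directory of bill text files.
    for name in find_blank_check_bills("bills/113th"):
        print(name)
```

A real analysis would presumably lean on Deepbills’ structured markup rather than raw string matching, but the basic flow—scan, flag, count—is the same.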

Our New Cybersecurity Strategy: An Acronym Firewall

A couple weeks ago, I had a brief tour of the Department of Homeland Security’s National Cybersecurity and Communications Integration Center, which probably isn’t quite as snazzy as U.S. Cyber Command’s Star Trek–inspired bridge, but looks more or less like the movies have programmed you to expect: a long wall filled with enormous screens displaying maps of each state’s self-assessed “cyber threat level,” the volume of traffic to various government networks, and even NCCIC’s Twitter feed. It’s not clear that this setup serves much functional purpose given that the analysts working there are already using three-monitor workstations, but let’s face it, taking tour groups reared on Hollywood’s version through a nondescript office would be a little anticlimactic. Which is to say, while the folks there are clearly doing some useful work, there’s an element of theater involved.

So too, it seems to me, with our political approach to cybersecurity more generally. The Washington Post reported Tuesday that the Obama administration plans to create a new Cyber Threat Intelligence Integration Center (CTIIC) within the Office of the Director of National Intelligence, which will join NCCIC and USCYBERCOM, as well as an array of private ISACs (Information Sharing and Analysis Centers) and CERTs (Computer Emergency Response Teams) on the digital front lines.  If firewalls made of acronyms could keep malware out, we’d be in fantastic shape.

The immediate reaction from both policy and security experts could best be described as “puzzled.”  After all, for several years we’ve been told that the Department of Homeland Security plays the lead role in coordinating the government’s cybersecurity efforts, and isn’t information sharing and integration pretty much what the NCCIC is supposed to be doing? That’s what it says on the tin, at any rate.  What, exactly, is supposed to be the advantage of spinning up an entirely new agency from scratch to share that mission?  Why would you house it in ODNI if your primary goal is to coax more information out of a wary and skeptical private sector?  Is there even good evidence that inadequate information “integration” is significantly to blame for the poor state of American cybersecurity? Our intelligence agencies, to be sure, could be doing a better job of sharing threat information with the private sector—but their own notorious culture of secrecy seems to be the limiting factor there. Even the White House’s own former cybersecurity coordinator, Melissa Hathaway, told the Post that “creating more organizations and bureaucracy” was unlikely to do much good.

My slightly cynical suspicion: cybersecurity is just fundamentally hard, and given that it depends on the complex practices of many thousands of private network owners, there’s just not a whole lot the government can do to drastically improve matters—beyond, of course, being more willing to share its own intel and hardening the government’s own networks, which it doesn’t seem to be terribly good at. But cybersecurity is a Serious Problem about which Something Must Be Done, and so like the drunk in the old joke—who lost his keys in the dark, but is searching for them under a streetlamp because the light’s better there—we make a great show of doing the things government is able to do. And since internal tweaks designed to make existing agencies do those things more effectively won’t generate the headlines that assure the public someone is on top of the problem, we get another spoonful of alphabet soup and another Hollywood command center doing the same thing with even bigger and more impressive wall monitors. But as Amie Stepanovich of Access aptly told The Hill: “You don’t necessarily get your house in order by building new houses.”

Bitcoin Regulation: “Assume the Existence of Public Interest Benefits!”

You’ve probably heard some version of the joke about the chemist, the physicist, and the economist stranded on a desert island. With a can of food but nothing to open it, the first two set to work on ingenious technical methods of accessing nutrition. The economist declares his solution: “Assume the existence of a can opener!”…

There are parallels to this in some U.S. state regulators’ approaches to Bitcoin. Beginning with the New York Department of Financial Services six months ago, regulators have put proposals forward without articulating how their ideas would protect Bitcoin users. “Assume the existence of public interest benefits!” they seem to be saying.

When it issued its “BitLicense” proposal last August, the New York DFS claimed that “[e]xtensive research and analysis” had “made clear the need for a new and comprehensive set of regulations that address the novel aspects and risks of virtual currency.” Yet, six months later, despite promises to do so under New York’s Freedom of Information Law, the NYDFS has not released that analysis, even as it has published a new “BitLicense” draft.

Yesterday, I filed comments with the Conference of State Bank Supervisors (CSBS) regarding their draft regulatory framework for digital currencies such as Bitcoin. CSBS is to be congratulated for taking a more methodical approach than New York: it has issued an outline and called for discussion before coming up with regulatory language. But the CSBS proposal never articulates how it addresses the unique challenges of the digital currency space. It simply contains a large batch of regulations similar to those already found in the financial services world.

FCC’s Net Neutrality Nuclear Option

Proponents of network neutrality regulation are cheering the announcement this week that the Federal Communications Commission will seek to reclassify Internet service providers as “common carriers” under Title II of the Communications Act. The move would trigger broad regulatory powers over Internet providers—some of which, such as the authority to impose price controls, the FCC has said it will “forbear” from asserting—in the name of “preserving the open Internet.”

Two initial thoughts:

First, the scope of the move reminds us that “net neutrality” has always been somewhat nebulously defined and therefore open to mission creep. To the extent there was any consensus definition, net neutrality was originally understood as being fundamentally about how ISPs like Comcast or Verizon treat data packets being sent to users, and whether the companies deliberately configured their routers to speed up or slow down certain traffic. Other factors that might affect the speed or quality of service—such as peering and interconnection agreements between ISPs and large content providers or backbone intermediaries—were understood to be a separate issue. In other words, net neutrality was satisfied so long as Comcast was treating packets equally once they’d reached Comcast’s network. Disputes over who should bear the cost of upgrading the connections between networks—though obviously relevant to the broader question of how quickly end-users could reach different services—were another matter.

Now the FCC will also concern itself with these contracts between corporations, giving content providers a fairly large cudgel to brandish against ISPs if they’re not happy with the peering terms on offer. In practice, even a “treat all packets equally” rule was going to be more complicated than it sounds on its face, because the FCC would still have to distinguish between permitted “reasonable network management practices” and impermissible “packet discrimination.” But that’s simplicity itself next to the problem of determining, on a case-by-case basis, when the terms of a complex interconnection contract between two large corporations are “unfair” or “unreasonable.”
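To make the original, narrower sense of the rule concrete, here is a toy sketch of the packet-level behavior at issue. Everything in it—the partner domain, the scheduling functions—is hypothetical illustration, not any ISP’s actual practice: a “neutral” router drains its queue in arrival order, while a discriminating one reorders traffic according to business relationships.

```python
from collections import namedtuple

Packet = namedtuple("Packet", ["source", "payload"])

# Hypothetical paid-prioritization deal between the ISP and one content provider.
PARTNERS = {"partner-cdn.example"}

def neutral_schedule(queue: list[Packet]) -> list[Packet]:
    """Treat all packets equally: forward in arrival order (FIFO)."""
    return list(queue)

def discriminatory_schedule(queue: list[Packet]) -> list[Packet]:
    """Forward a business partner's packets ahead of everyone else's.
    sorted() is stable, so arrival order is preserved within each group."""
    return sorted(queue, key=lambda p: p.source not in PARTNERS)

arrivals = [
    Packet("startup-video.example", b"frame-1"),
    Packet("partner-cdn.example", b"frame-1"),
    Packet("startup-video.example", b"frame-2"),
]
print([p.source for p in neutral_schedule(arrivals)])         # arrival order
print([p.source for p in discriminatory_schedule(arrivals)])  # partner first
```

The hard part, as noted above, is that a real router may legitimately reorder packets for congestion management—prioritizing by traffic class rather than by business partner—and a regulator has to tell those two motives apart from the outside. Interconnection disputes add a layer the sketch doesn’t touch at all: they happen before packets ever reach this queue.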

Second, it remains pretty incredible to me that we’re moving toward a broad preemptive regulatory intervention before we’ve even seen what deviations from neutrality look like in practice. Nobody, myself included, wants to see the “nightmare scenario” where ISPs attempt to turn the Internet into a “walled garden” whose users can only access the sites of their ISP’s corporate partners at usable speeds, or where ISPs act to throttle businesses that might interfere with their revenue streams from (say) cable television or voice services. There are certainly hypothetical scenarios that could play out where I’d agree intervention was justified—though I’d also expect targeted interventions by agencies like the Federal Trade Commission to be the most sensible first resort in those cases.

Does the Government Require Your Hotel to Spy on You?

If you’re a privacy-conscious traveler, you may have wondered from time to time why hotels ask for ID when you check in, or why they ask you for the make and model of your car and other information that isn’t essential to the transaction. What’s the ID-checking for? There’s never been a problem with fraudsters checking into hotels under others’ reservations, paying for the privilege of doing so…

Well, in many jurisdictions around the country, that information-gathering is mandated by law. Local ordinances require hotels, motels, and other lodging providers (such as Airbnb hosts) to collect this information and keep it on hand. These laws also require that the information be made available to the police on request, for any reason or no reason, without a warrant.

That’s the case in Los Angeles, whose ordinance not only requires this data retention about hotel guests for law enforcement to access at will or whim, but also requires hoteliers to check a government-issued ID from guests who pay cash.

Open access to hotel records may have been innocuous enough in the early years of travel and lodging, when reading through hotel registers was a social sport among the wealthy few who could afford long-distance travel. Today, tourism is available to the masses, and hotel records enjoy tighter privacy protections—against the general public, at least. Most people would quit a hotel that left their information open to the public, and many would be surprised to learn that hoteliers’ records are open to law enforcement collection and review without any legal process.