New Hampshire Ends Brief Flirtation with National ID Compliance

When the REAL ID Act passed in 2005, Senator Joe Lieberman (D-CT), no civil libertarian, called the national ID law “unworkable” for good reason. It seeks to herd all Americans into a national ID system by coercing states into issuing driver’s licenses (and sharing information about their drivers) according to complex federal standards.

The hook REAL ID uses in seeking to dragoon states into compliance is the threat that TSA agents will refuse IDs from non-complying states at our nation’s airports. The threat is an empty one. Consistently over years, every time a DHS-created compliance deadline has come around, state leaders with spines have backed the Department of Homeland Security down. I detailed the years-long saga of pushed-back deadlines last year in the Cato Policy Analysis, “REAL ID: A State-by-State Update.”

DHS has stopped publishing deadline changes in the Federal Register–perhaps the endless retreats were getting embarrassing–and now it has simply said on its website that TSA enforcement will begin sometime in 2016. But it’s evidently back-channeling threats to state officials. Those folks–unaware that REAL ID doesn’t work, and uninterested in the allocation of state and federal power–are lobbying their state legislatures to get on board with the national ID program.

New York Proposes Special Bitcoin Regulation, But Won’t Say Why

Yesterday, the New York Department of Financial Services (NYDFS) issued the second draft of its “BitLicense” proposal, a special, technology-specific regulation for digital currencies like Bitcoin. For a second time, the NYDFS claims to have a strong rationale for such regulation, but it has not revealed its rationale to the public, even though it is required to do so by New York’s Freedom of Information Law.

If you’re just joining the “BitLicense” saga, the NYDFS welcomed Bitcoin in August 2013 by subpoenaing every important person in the Bitcoin world. A few months later, New York’s Superintendent of Financial Services announced his plan for a special “BitLicense,” which would be required of anyone wanting to provide Bitcoin-based services in New York.

About a year later, Superintendent Lawsky released the first draft of the “BitLicense” proposal, to strongly negative reviews from the Bitcoin community. It didn’t help that after a year’s work the NYDFS offered the statutory minimum of 45 days to comment. Relenting to public demand, the NYDFS extended the comment period.

In announcing the regulation, the NYDFS cited “extensive research and analysis” that it said justifies placing unique regulatory burdens on Bitcoin businesses. On behalf of the Bitcoin Foundation, yours truly asked to see that “extensive research and analysis” under New York’s Freedom of Information Law. The agency quickly promised timely access, but in early September last year it reversed itself and said that it may not release its research until December.

What NSA Director Mike Rogers Doesn’t Get About Encryption

At a New America Foundation conference on cybersecurity Monday, NSA Director Mike Rogers gave an interview that—despite his best efforts to deal exclusively in uninformative platitudes—did produce a few lively moments. The most interesting of these came when techies in the audience—security guru Bruce Schneier and Yahoo’s chief information security officer Alex Stamos—challenged Rogers’ endorsement of a “legal framework” for requiring device manufacturers and telecommunications service providers to give the government backdoor access to their users’ encrypted communications. (Rogers repeatedly objected to the term “backdoor” on the grounds that it “sounds shady”—but that is quite clearly the correct technical term for what he’s seeking.) Rogers’ exchange with Stamos, transcribed by John Reed of Just Security, is particularly illuminating:

Alex Stamos (AS): “Thank you, Admiral. My name is Alex Stamos, I’m the CISO for Yahoo!. … So it sounds like you agree with Director Comey that we should be building defects into the encryption in our products so that the US government can decrypt…

Mike Rogers (MR): That would be your characterization. [laughing]

AS: No, I think Bruce Schneier and Ed Felten and all of the best public cryptographers in the world would agree that you can’t really build backdoors in crypto. That it’s like drilling a hole in the windshield.

MR: I’ve got a lot of world-class cryptographers at the National Security Agency.

AS: I’ve talked to some of those folks and some of them agree too, but…

MR: Oh, we agree that we don’t accept each other’s premise. [laughing]

AS: We’ll agree to disagree on that. So, if we’re going to build defects/backdoors or golden master keys for the US government, do you believe we should do so — we have about 1.3 billion users around the world — should we do for the Chinese government, the Russian government, the Saudi Arabian government, the Israeli government, the French government? Which of those countries should we give backdoors to?

MR: So, I’m not gonna… I mean, the way you framed the question isn’t designed to elicit a response.

AS: Well, do you believe we should build backdoors for other countries?

MR: My position is — hey look, I think that we’re lying that this isn’t technically feasible. Now, it needs to be done within a framework. I’m the first to acknowledge that. You don’t want the FBI and you don’t want the NSA unilaterally deciding, so, what are we going to access and what are we not going to access? That shouldn’t be for us. I just believe that this is achievable. We’ll have to work our way through it. And I’m the first to acknowledge there are international implications. I think we can work our way through this.

AS: So you do believe then, that we should build those for other countries if they pass laws?

MR: I think we can work our way through this.

AS: I’m sure the Chinese and Russians are going to have the same opinion.

MR: I said I think we can work through this.

I’ve written previously about why backdoor mandates are a horrible, horrible idea—and Stamos hits on some of the reasons I’ve pointed to in his question. What’s most obviously disturbing here is that the head of the NSA didn’t even seem to have a bad response prepared to such an obvious objection—he had no serious response at all. China and Russia may not be able to force American firms like Google and Apple to redesign their products to be more spy-friendly, but if the American government does their dirty work for them with some form of legal backdoor mandate, those firms will be hard pressed to resist demands from repressive regimes to hand over the keys. Rogers’ unreflective response seems like a symptom of what a senior intelligence official once described to me as the “tyranny of the inbox”: a mindset so myopically focused on solving one’s own immediate practical problems that the bigger picture—the dangerous long-term consequences of the easiest or most obvious quick-fix solution—is barely considered.

How the NSA Stole the Keys to Your Phone

A blockbuster story at The Intercept Thursday revealed that a joint team of hackers from the National Security Agency and its British counterpart, the Government Communications Headquarters (GCHQ), broke into the systems of one of the world’s largest manufacturers of cell phone SIM cards in order to steal the encryption keys that secure wireless communications for hundreds of mobile carriers—including companies like AT&T, T-Mobile, Verizon, and Sprint.  To effect the heist, the agencies targeted employees of the Dutch company Gemalto, scouring e-mails and Facebook messages for information that would enable them to compromise the SIM manufacturer’s networks in order to make surreptitious copies of the keys before they were transmitted to the carriers. Many aspects of this ought to be extremely disturbing.

First, this is a concrete reminder that, as former NSA director Michael Hayden recently acknowledged, intelligence agencies don’t spy on “bad people”; they spy on “interesting people.” In this case, they spied extensively on law-abiding technicians employed by a law-abiding foreign corporation, then hacked that corporation in apparent violation of Dutch law. We know this was hardly a unique case—one NSA hacker boasted in Snowden documents disclosed nearly a year ago about “hunting sysadmins”—but it seems particularly poetic coming on the heels of the recent Sony hack, properly condemned by the U.S. government. Dutch legislators quoted in the story are outraged, as well they should be. Peaceful private citizens and companies in allied nations, engaged in no wrongdoing, should not have to worry that the United States is trying to break into their computers.

Second, indiscriminate theft of mobile encryption keys bypasses one of the few checks on government surveillance by enabling wiretaps without the assistance of mobile carriers. Under the typical model for wiretaps, a government presents the carrier with some form of legal process specifying which accounts or lines are targeted for surveillance, and the company then provides those communications to the government. As the European telecom Vodafone disclosed last summer, however, some governments insist on being granted “direct access” to the stream of communications so that they can conduct their wiretaps without going through the carrier. The latter architecture, of course, is far more susceptible to abuse, because it removes the only truly independent, nongovernmental layer of review from the collection process. A spy agency that wished to abuse its power under the former model—by conducting wiretaps without legal authority or inventing pretexts to target political opponents—would at least have to worry that lawyers or technicians at the telecommunications provider might detect something amiss. But any entity armed with mobile encryption keys effectively enjoys direct access: it can vacuum up cellular signals out of the air and listen to any or all of the calls it intercepts, subject only to internal checks or safeguards.
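To see why holding the SIM keys makes carrier cooperation unnecessary, consider a deliberately simplified sketch. Real GSM networks derive a per-call session key from the SIM’s long-term key (Ki) and a random challenge (RAND) that the network sends over the air in the clear; the toy key derivation and stream cipher below are stand-ins for the real algorithms (A8 and A5), used here only to illustrate the structural point:

```python
import hashlib
import hmac

def derive_session_key(ki: bytes, rand: bytes) -> bytes:
    # Stand-in for the SIM's key-derivation function: carrier and handset
    # each compute the same session key from the long-term key Ki and a
    # challenge RAND that crosses the airwaves unencrypted.
    return hmac.new(ki, rand, hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher standing in for the real over-the-air cipher.
    blocks = len(data) // 32 + 1
    stream = hashlib.sha256(key).digest() * blocks
    return bytes(a ^ b for a, b in zip(data, stream))

# The carrier and the phone share Ki (burned into the SIM at manufacture).
ki = b"long-term-subscriber-key-on-SIM!"
rand = b"challenge-sent-in-the-clear"

session_key = derive_session_key(ki, rand)
ciphertext = xor_stream(session_key, b"hello, world")

# An eavesdropper who stole Ki from the SIM maker needs nothing from the
# carrier: it observes RAND off the air, re-derives the session key, and
# decrypts the call -- effectively the "direct access" described above.
eavesdropper_key = derive_session_key(ki, rand)
recovered = xor_stream(eavesdropper_key, ciphertext)
print(recovered)  # b'hello, world'
```

The point of the sketch is that nothing in the decryption path touches the carrier’s systems, which is why bulk key theft removes the carrier as a checkpoint.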

There are, to be sure, times when going to the target’s carrier with legal process is not a viable option—because the company is outside the jurisdiction of the United States or our allies. Stealing phone keys in bulk is certainly a much easier solution to that problem than crafting interception strategies tailored to either the specific target or specific uncooperative foreign carriers. Unfortunately, the most convenient solution in this case is also a solution that gives the United States (or at least its intelligence community) a vested interest in the systematic insecurity of global communications infrastructure. We hear a great deal lately about the value of information sharing in cybersecurity: Well, here’s a case where NSA had information that the technology American citizens and companies rely on to protect their communications was not only vulnerable, but had in fact been compromised. Their mission is supposed to be to help us secure our communications networks—but having chosen the easy solution to the problem of conducting cellular wiretaps, their institutional incentives are to do just the opposite.

Finally, this is one more demonstration that proposals to require telecommunications providers and device manufacturers to build law enforcement backdoors in their products are a terrible, terrible idea. As security experts have rightly insisted all along, requiring companies to keep a repository of keys to unlock those backdoors makes the key repository itself a prime target for the most sophisticated attackers—like NSA and GCHQ. It would be both arrogant and foolhardy in the extreme to suppose that only “good” attackers will be successful in these efforts. 

Congress’s Blank-Check Bills

Luke Rosiak at the Washington Examiner filed a report late last week on a little-recognized but important congressional practice: proposing open-ended spending. In the last Congress, fully 700 bills proposed spending without limits. That’s a lot.

A quick primer: congressional spending is a two-step process. First, there must be an authorization of appropriations. Then Congress appropriates funds, providing actual authority for executive branch agencies to spend.

The committees in Congress are divided by type between authorizing committees and appropriations committees. Authorizers are supposed to do the bulk of the oversight and authorize spending at amounts they determine. Appropriators would then dole out funds specifically. But over the years, the division of labor has shifted and power has collected in the appropriations committees, whose members are often referred to as “cardinals” … like “College of Cardinals.”

Backward incentives explain this. Members of Congress who authorize spending naturally appear to be pro-spending, which has political costs. The costs are at their worst when a specific amount is involved. “Senator So-and-So wants to spend $50 million on what?!” So many authorizing committees shirk their duties by eschewing reauthorization of the agencies in their jurisdiction. And sometimes the trick is authorizing spending of “such sums as may be necessary,” which doesn’t provide as good an angle for political attack.

Appropriators would then seem to be the only drag on spending, but they aren’t, thanks to a second perversion in politics. Appropriators get good enough at gathering the political emoluments of spending that they overcome the negatives and become an institutional pro-spending bloc. As Mike Franc of the Heritage Foundation put it in 2011, “appropriators, their professional staff, and legions of lobbyists serve as a mutually reinforcing triad bent on increasing spending today, tomorrow, and forevermore.”

Rosiak notes that the House Republican leadership cautioned against open-ended spending proposals at the beginning of the 113th Congress. Consequently, Republican blank-check bills are rarer. The top open-ended spenders are all Democrats, and they’re all on the party’s left wing.

So what’s to be done?

In 2010, the Senate joined the House in banning earmarks. This came after a few short years of applied transparency in the earmark area, including a contest to gather earmark data conducted by yours truly on WashingtonWatch.com. A group called Taxpayers Against Earmarks (now Ending Spending) applied some direct pressure. And a host of other groups were involved, of course.

The practice of proposing open-ended spending could similarly be curtailed with public oversight and pressure.

So who should do that work?

We’ve already started. Rosiak’s story was produced using the Cato Institute’s Deepbills data.

Our New Cybersecurity Strategy: An Acronym Firewall

A couple weeks ago, I had a brief tour of the Department of Homeland Security’s National Cybersecurity and Communications Integration Center, which probably isn’t quite as snazzy as U.S. Cyber Command’s Star Trek–inspired bridge, but looks more or less like the movies have programmed you to expect: a long wall filled with enormous screens displaying maps of each state’s self-assessed “cyber threat level,” the volume of traffic to various government networks, and even NCCIC’s Twitter feed. It’s not clear that this setup serves much functional purpose given that the analysts working there are already using three-monitor workstations, but let’s face it, taking tour groups reared on Hollywood’s version through a nondescript office would be a little anticlimactic. Which is to say, while the folks there are clearly doing some useful work, there’s an element of theater involved.

So too, it seems to me, with our political approach to cybersecurity more generally. The Washington Post reported Tuesday that the Obama administration plans to create a new Cyber Threat Intelligence Integration Center (CTIIC) within the Office of the Director of National Intelligence, which will join NCCIC and USCYBERCOM, as well as an array of private ISACs (Information Sharing and Analysis Centers) and CERTs (Computer Emergency Response Teams) on the digital front lines.  If firewalls made of acronyms could keep malware out, we’d be in fantastic shape.

The immediate reaction from both policy and security experts could best be described as “puzzled.”  After all, for several years we’ve been told that the Department of Homeland Security plays the lead role in coordinating the government’s cybersecurity efforts, and isn’t information sharing and integration pretty much what the NCCIC is supposed to be doing? That’s what it says on the tin, at any rate.  What, exactly, is supposed to be the advantage of spinning up an entirely new agency from scratch to share that mission?  Why would you house it in ODNI if your primary goal is to coax more information out of a wary and skeptical private sector?  Is there even good evidence that inadequate information “integration” is significantly to blame for the poor state of American cybersecurity? Our intelligence agencies, to be sure, could be doing a better job of sharing threat information with the private sector—but their own notorious culture of secrecy seems to be the limiting factor there. Even the White House’s own former cybersecurity coordinator, Melissa Hathaway, told the Post that “creating more organizations and bureaucracy” was unlikely to do much good.

My slightly cynical suspicion: Cybersecurity is just fundamentally hard, and given that it depends on the complex practices of many thousands of private network owners, there’s just not a whole lot the government can do to drastically improve matters—beyond, of course, being more willing to share their own intel and hardening the government’s own networks, which they don’t seem to be terribly good at. But cybersecurity is a Serious Problem about which Something Must Be Done, and so like the drunk in the old joke—who lost his keys in the dark, but is searching for them under a streetlamp because the light’s better there—we make a great show of doing the things government is able to do. Internal tweaks designed to make existing agencies do those things more effectively won’t make headlines, and headlines are what assure the public that someone is on top of the problem. So instead we get another spoonful of alphabet soup and another Hollywood command center to do the same thing with even bigger and more impressive wall monitors. But as Amie Stepanovich of Access aptly told The Hill: “You don’t necessarily get your house in order by building new houses.”

Bitcoin Regulation: “Assume the Existence of Public Interest Benefits!”

You’ve probably heard some version of the joke about the chemist, the physicist, and the economist stranded on a desert island. With a can of food but nothing to open it, the first two set to work on ingenious technical methods of accessing nutrition. The economist declares his solution: “Assume the existence of a can opener!”…

There are parallels to this in some U.S. state regulators’ approaches to Bitcoin. Beginning with the New York Department of Financial Services six months ago, regulators have put proposals forward without articulating how their ideas would protect Bitcoin users. “Assume the existence of public interest benefits!” they seem to be saying.

When it issued its “BitLicense” proposal last August, the New York DFS claimed “[e]xtensive research and analysis” that it said “made clear the need for a new and comprehensive set of regulations that address the novel aspects and risks of virtual currency.” Yet, six months later, despite promises to do so under New York’s Freedom of Information Law, the NYDFS has not released that analysis, even while it has published a new “BitLicense” draft.

Yesterday, I filed comments with the Conference of State Bank Supervisors (CSBS) regarding their draft regulatory framework for digital currencies such as Bitcoin. CSBS is to be congratulated for taking a more methodical approach than New York. They’ve issued an outline and have called for discussion before coming up with regulatory language. But the CSBS proposal lacks an articulation of how it addresses unique challenges in the digital currency space. It simply contains a large batch of regulations similar to what is already found in the financial services world.