Topic: Telecom, Internet & Information Policy

The Unpersuasive Case for the NSA Call Dragnet’s Effectiveness

Sen. Dianne Feinstein (D-CA) has an op-ed in the Wall Street Journal ($) defending the NSA’s bulk call records database as a “vital” counterterrorism tool. While this wouldn’t make the program legal even if true, it seems clear that the secret Foreign Intelligence Surveillance Court (FISC) has relied, rather uncritically, on the government’s assertions of “necessity” to draw the strained conclusion that every American’s phone records are “relevant” to FBI counterterrorism investigations. It’s thus worth pointing out how extraordinarily weak the case for the program’s utility really is. Feinstein begins by recycling the claim that if only the NSA program had existed in 2001, the 9/11 hijackers could have been identified and halted before carrying out their catastrophic attack:

Intelligence officials knew about an al Qaeda safe house in Yemen with ties to [hijacker Khalid] al-Mihdhar as well as the safe house’s telephone number, but they had no way of knowing if anyone inside the U.S. was in contact with that phone number in Yemen. Only after 9/11 did we learn that al-Mihdhar, while living in San Diego, had called the safe house.

In congressional testimony in June, FBI Director Bob Mueller said that if intelligence officials had had the NSA’s searchable database of U.S. telephone-call records before 9/11, they would have been able to connect the number to al-Mihdhar and produce actionable intelligence on participants of the developing plot. NSA Director Keith Alexander testified before Congress in October that if the call-records program had existed before 9/11, there is a “very high” likelihood that we would have detected the impending attack that killed 3,000 Americans.

The most obvious problem with this argument is that the court order we’ve seen for phone records explicitly demands two distinct categories of records, for calls “(i) between the United States and abroad, or (ii) wholly within the United States, including local telephone calls.” The first category might have helped identify calls to or from a known safehouse in Yemen, but the latter, much larger category rather obviously would not.  This is simply an attempt to exploit the tragedy of 9/11 to deflect criticism of massive domestic surveillance that would not have been any use in preventing that attack.

Facebook Opens Takedown Hotline for Public School Officials

I was critical earlier this year when lawmakers in my home state of Maryland enacted “Grace’s Law,” purporting to ban so-called cyberbullying — in this case, the use of hurtful online language as part of a course of conduct that inflicts serious emotional distress or harassment on a Maryland juvenile, apparently whether or not the speaker knows that the person distressed by the speech is a Maryland juvenile. I predicted that the law would run into trouble in the courts for infringing on much speech protected by the First Amendment.

On Tuesday, the new law took effect, and this morning Maryland attorney general Douglas Gansler unveiled a joint initiative with Facebook and the National Association of Attorneys General (NAAG) in which Facebook will create a new program for school officials, the Educator Escalation Channel — initially limited to use in the state of Maryland, presumably pending similar enactments elsewhere — allowing the officials to object to Facebook users’ content. Per local radio station WTOP, Maryland school officials will be offered the chance to flag “questionable or prohibited” language. That is to say, they will flag speech that isn’t prohibited by the new law but which they deem “questionable.”

The targets of the new program, according to Gansler as quoted by WTOP, include persons who are “not committing a crime… We’re not going to go after you, but we are going to take down the language off of Facebook, because there’s no redeeming societal value and it’s clearly hurting somebody.” That is to say, Gansler believes he has negotiated power for school officials to go after speech that is not unlawful even under the decidedly speech-unfriendly definitions of the new Maryland law, but which they consider hurtful and lacking in “redeeming societal value.”

Already, defenders of the new program are arguing that there’s no problem here, because Facebook, as a private entity, is free voluntarily to put whatever terms it wants into its user agreement and enforce them however it likes. Of course, private companies deal voluntarily with a group of state enforcers like the NAAG only in the sense that you or I deal voluntarily with the Internal Revenue Service.

Can we now finally start taking the First Amendment implications of these laws seriously?

The Government Shutdown on the Web

If you’ve tried to reach a government site today, you may have noticed that the “shutdown” applies to the virtual homes and social media accounts of federal agencies no less than their brick-and-mortar offices… at least some of them. It’s a bit hard to make sense of why some sites remain up (some with a “no new updates” banner) while others are redirected to a shutdown notice page—and in many cases it’s puzzling why a shutdown would be necessary at all. With the offices closed, you might not have personnel on hand to add new content or other updates, but is pulling the existing content down strictly necessary?

For agencies that directly run their own Web sites on in-house servers, shutting down might make sense if the agency’s “essential” and “inessential” systems are suitably segregated. Running the site in those cases eats up electricity and bandwidth that the agency is paying for, not to mention the IT and security personnel who need to monitor the site for attacks and other problems. Fair enough in those cases. But those functions are, at least in the private sector, often outsourced and paid for up front: if you’ve contracted with an outside firm to host your site, shutting it down for a few days or weeks may not save any money at all. And that might indeed explain why some government sites remain operational, even though they don’t exactly seem “essential,” while others have been pulled down.

That doesn’t seem to account for some of the weird patterns we see, however. The main page at NASA.gov redirects to a page saying the site is unavailable, but lots of subdomains that, however cool, seem “inessential” remain up and running: the “Solar System Exploration” page at solarsystem.nasa.gov; the Climate Kids website at climatekids.nasa.gov; and the large photo archive at images.jsc.nasa.gov, to name a few. There are any number of good reasons some of those subdomains might be hosted separately, and therefore unaffected by the shutdown—but it seems odd they can keep all of these running without additional expenditures, yet aren’t able to redirect to a co-located mirror of the landing page. 
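
One rough way to check whether those subdomains really are served from different infrastructure than the main site is to compare what the hostnames resolve to; a shared CDN or load balancer can muddy the picture, so the results are suggestive at best. Here is a minimal Python sketch of that check; the hostnames are just the ones mentioned above, plus www.nasa.gov as my own guess at the main site’s canonical name:

    # Hypothetical illustration, not from the post above: compare where the main
    # NASA site and the subdomains mentioned above resolve. Different addresses
    # are consistent with (though not proof of) separate hosting arrangements.
    import socket

    hosts = [
        "www.nasa.gov",            # main site (redirected to a shutdown notice)
        "solarsystem.nasa.gov",    # still up
        "climatekids.nasa.gov",    # still up
        "images.jsc.nasa.gov",     # still up
    ]

    for host in hosts:
        try:
            # gethostbyname returns a single IPv4 address for the name
            print(f"{host:25s} -> {socket.gethostbyname(host)}")
        except socket.gaierror as err:
            print(f"{host:25s} -> lookup failed: {err}")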

Are Internet Backbone Pen Registers Constitutional?

Between ongoing publication of Edward Snowden’s leaks and a series of frankly unprecedented disclosures by the government itself, the public now knows quite a bit about the NSA’s controversial telephony metadata program, which makes use of the Patriot Act’s §215 to collect, in bulk, nearly all Americans’ domestic call detail records from telephone carriers. We know far less, however, about the government’s bulk collection of Internet metadata under FISA’s pen register/trap-&-trace authority, which supposedly ceased in 2011—though some such collection almost certainly continues in a more limited form. That collection merits closer attention, because the legal argument that bulk metadata acquisition doesn’t violate the Fourth Amendment—rehearsed in a recent post at Just Security by Orin Kerr—simply doesn’t work for acquisition of Internet metadata at the backbone, for technical reasons that it’s not at all clear the Foreign Intelligence Surveillance Court has considered.

The FISC’s recently declassified memorandum opinion authorizing bulk telephony metadata collection contains a dismayingly cursory Fourth Amendment analysis resting on the now-familiar reasoning of Smith v. Maryland: Users voluntarily convey phone dialing information to a “third party” (i.e., the phone company), knowing that information will be retained in the company’s records for routine business purposes. They thereby “assume the risk” that these records will be shared with the government—notwithstanding any contrary promises of confidentiality—and so waive their Fourth Amendment expectation of privacy in that information. This is the so-called “third party doctrine.” The ruling in Smith has been widely and justly condemned—and as Jennifer Granick has ably argued, is of dubious relevance to NSA’s bulk collection program anyway. But let’s pretend for the moment, strictly arguendo, that this reasoning is not crazy on its face.

To Administer the Fourth Amendment, Recognize Reasonable Searches and Seizures

Over the last few years, I’ve dedicated more and more effort to righting the Fourth Amendment, which has been weakened over decades by doctrines that don’t measure up to the times.

You can see my efforts and their evolution in my American University Law Review article, “Reforming Fourth Amendment Privacy Doctrine” (2008); Cato’s brief to the Supreme Court in U.S. v. Jones (Oct. 2011); Cato’s brief to the Supreme Court in Florida v. Jardines (July 2012); my Cato Supreme Court Review article, “Escaping Fourth Amendment Doctrine After Jones: Physics, Law, and Privacy Protection” (Sept. 2012); my Cato Policy Report article, “U.S. v. Jones: Fourth Amendment Law at a Crossroads” (Sept./Oct. 2012); and, most recently, Cato’s brief to the Supreme Court in In re: EPIC (August 2013).

Today, I had the opportunity to expound on my thinking at a National Press Club event hosted by the Electronic Privacy Information Center to discuss their challenge to the National Security Agency’s bulk telephone data collection. Moderator Jeffrey Rosen, recently named President and CEO of the National Constitution Center, allotted me a good deal of time, and we discussed things a little more after the session. I’m ever-sharpening my thinking about how the Fourth Amendment should operate, and how to talk about it.

The starting point is this: The “reasonable expectation of privacy” doctrine, which grew out of Katz v. United States (1967), is a failure. Courts almost never actually investigate whether a subjective “expectation of privacy” is objectively reasonable, and they’re in no position to make broad societal pronouncements on the latter question anyway. It’s worth noting that the doctrine is not a product of the Katz majority, which focused on the steps Katz had taken to conceal the sound of his voice—steps upended by government agents’ placement of a bug in a phone booth without a warrant.

The Fourth Amendment should be administered as a law once again. To administer a law protecting “[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures,” you’d ask four questions:

Secrecy Is Delegation of Power

With allegations (and denials) of economic espionage and reports of broad access to cell phone data joining last week’s blockbuster revelation that the National Security Agency has worked to undermine encryption, it’s hard to keep up.

But Julian had it right on the jaw-dropping encryption news in his post last week, “NSA’s War on Global Cybersecurity.” A national-security-aimed attack on encryption systems that protect all our communications and data—our financial transactions, privileged communications with attorneys, medical records, and more—is like publishing faulty medical research just to prevent a particular foreign dictator from being cured. It is penny-wise and pound-foolish. For a while it had looked to me as though the U.S. government might be hoarding vulnerabilities and cultivating new attacks rather than contributing to worldwide security by helping to close gaps in vulnerable technologies. And now we have the proof.

Shane Harris’s excellent Foreign Policy article today looks at NSA Director General Keith Alexander, calling him “The Cowboy of the NSA.” Fast and loose with the law, Alexander has used his folksy demeanor to downplay the significance of his efforts. Meanwhile, he and his “mad scientist” advisor James Heath have done whatever they want—and lobbied for it adroitly—awash in taxpayer money. Harris reports:

When he was running the Army’s Intelligence and Security Command, Alexander brought many of his future allies down to Fort Belvoir for a tour of his base of operations, a facility known as the Information Dominance Center. It had been designed by a Hollywood set designer to mimic the bridge of the starship Enterprise from Star Trek, complete with chrome panels, computer stations, a huge TV monitor on the forward wall, and doors that made a “whoosh” sound when they slid open and closed. Lawmakers and other important officials took turns sitting in a leather “captain’s chair” in the center of the room and watched as Alexander, a lover of science-fiction movies, showed off his data tools on the big screen.

And:

“He moved fairly fast and loose with money and spent a lot of it,” [a] retired officer says. “He doubled the size of the Information Dominance Center and then built another facility right next door to it. They didn’t need it. It’s just what Heath and Alexander wanted to do.” The Information Operations Center, as it was called, was underused and spent too much money, says the retired officer. “It’s a center in search of a customer.”

I find myself nonplussed by the glib reaction of some conservatives to this wanton bureaucratic behavior. Cracking the encryption systems that protect us all cannot be waved off as “the task we’ve given the NSA.” So I offer this framework for thinking about the NSA and its behavior: Secrecy is a delegation of power from elected officials to unaccountable bureaucrats.

This is not to deny that there is some need for secrecy sometimes, but, at the scope we’ve seen, secrecy has the same effects as, and worse effects than, other delegations of power that conservatives and libertarians object to.

NSA’s War on Global Cybersecurity

In its myopic quest to ensure that no digital communication remains hidden from its panoptic gaze, the National Security Agency has worked to undermine the security of all Internet users, a new story in the New York Times reveals. As security expert Bruce Schneier aptly summarizes the report, “Government and industry have betrayed the internet, and us.” 

In this case, the Times notes, the NSA has not just arrogated power to itself in secret, but has done so after unambiguously losing an extended public political debate in the 1990s over whether the government should be legally provided with backdoor access to encrypted communications, or attempt to prevent strong encryption software from being available to users around the world. As security experts understood, and successfully argued at the time, ensuring that companies and individual users around the world could trust the security of their communications was vastly more important than ensuring the NSA or FBI would never encounter a message they couldn’t decipher—something that, in any event, would be impossible to guarantee.

Having justly lost the public debate, the NSA secretly decided to sacrifice the rest of the world’s interests to its own goals anyway:

According to an intelligence budget document leaked by Mr. Snowden, the N.S.A. spends more than $250 million a year on its Sigint Enabling Project, which “actively engages the U.S. and foreign IT industries to covertly influence and/or overtly leverage their commercial products’ designs” to make them “exploitable.” Sigint is the acronym for signals intelligence, the technical term for electronic eavesdropping. […]

Simultaneously, the N.S.A. has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies,” the most common encryption method.

Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of Standards and Technology and later by the International Organization for Standardization, which has 163 countries as members.

Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”