Tag: Patriot Act

Patriot Act Update

It looks as though we’ll be getting a straight one-year reauthorization of the expiring provisions of the Patriot Act, without even the minimal added safeguards for privacy and civil liberties that had been proposed in the Senate’s watered-down bill. This is disappointing, but it was also eminently predictable: between health care and the economy, it was clear Congress wasn’t going to make time for any real debate on substantive reform of surveillance law. Still, the fact that the reauthorization is only for one year suggests that the reformers plan to give it another go—though, in all probability, we won’t see any action on this until after the midterm elections.

The silver lining here is that this creates a bit of breathing room, and means legislators may now have a chance to take account of the absolutely damning Inspector General’s report, which found that the FBI repeatedly and systematically broke the law by exceeding its authorization to gather information about people’s telecommunications activities. It also means the debate need not be contaminated by the panic over the Fort Hood shootings or the failed Christmas bombing—neither of which has anything whatever to do with the specific provisions at issue here, but both of which would doubtless have been invoked ad nauseam anyway.

Retroactive Surveillance Immunity, Obama Style

There’s a lot to unpack in the blistering 300-page report on illegal FBI abuse of surveillance authority that the Office of the Inspector General issued last month, but I want to highlight one especially worrisome aspect, about which I spoke with The Atlantic’s Marc Ambinder earlier today.

The very short version of the report’s background finding is that, for several years, analysts at the FBI blithely and illegally circumvented even the minimal checks on their power to demand telephone records under the PATRIOT Act. I’ll go into this further in a future post, but there are strong indicators that the agents involved knew they were doing something shady. Thousands of records were obtained using a basically made-up process called an “exigent letter,” whereby agents asked for records with what amounted to an IOU promising legitimate legal process any day now. (In many of those cases, the legitimate legal process would not actually have been available for the records obtained.) Still more disturbing, an unknown number of records were obtained without even this fictitious process: Agents simply made informal requests verbally, by e-mail, or via post-it note. And hey, why bother with subpoenas or National Security Letters when you can just slap a sticky on someone’s monitor?

Treated to a preview of the OIG’s damning conclusions, the FBI was eager to find some way to cover its massive lawbreaking. So they apparently crafted a novel legal theory after the fact, in hopes of finding some way to shoehorn their actions into federal privacy statutes.  On January 8—as in four weeks ago, years after the conduct occurred—the Office of Legal Counsel seems to have blessed the FBI’s theory, which unfortunately remains secret.  Democratic Sens. Russ Feingold, Dick Durbin, and Ron Wyden have asked the Justice Department for details, but at present we just don’t know what kind of loopholes DOJ believes exist in the law meant to protect our sensitive calling records.

Communications records are generally protected by Chapter 121 of Title 18, known to its buddies as the Stored Communications Act. The few snippets of unredacted material in the OIG report suggest that the FBI’s argument is that the statute does not apply to certain classes of call records. Presumably, the place to look for the loophole is in §2702, which governs voluntary disclosures by telecom firms.  There is, of course, an exemption for genuine emergencies—imminent threats to life and limb—but these, we know, are not at issue here because most of the records were not sought in emergency situations. But there are a number of other loopholes. The statute governs companies providing electronic communications services “to the public”—which encompasses your cell company and your ISP, but probably not the internal networks of your university or employer. The activity at issue here, however, involved the major telecom carriers, so that’s probably not it. There’s another carve-out for records obtained with the consent of the subscriber, which might cover certain government employees who’ve signed off on surveillance as a condition of employment. We do know that in some cases, the records obtained had to do with leak investigations, but that doesn’t seem especially likely either, since the FBI claims (though the OIG expresses its doubts about the veracity of the claim) that the justification would apply to the “majority” of records obtained.

My current best guess, based on what little we know, is this. The SCA refers to, and protects from disclosure to any “government entity,” the records of “customers” and “subscribers.”  But telecommunications firms may often have records about the calling activity of people who are not the customers or subscribers of that company. For example, reciprocal agreements between carriers will often permit a phone that’s signed up with one cell provider to make use of another company’s network while roaming. When these outside phones register on a network, that information goes to a database called the Visitor Location Register. You could imagine a clever John Yoo type arguing that the SCA does not cover information in the VLR, since it does not constitute a “subscriber” or “customer” record. Of course, it beggars belief to think that Congress intended to allow such a loophole—or, indeed, had even considered such technical details of cell network architecture.

My guess, to be sure, could be wrong. But that just points to the larger problem: The Justice Department believes that some very clever lawyerly reading of the privacy statutes—so very clever that despite the rampant “creativity” of the Bush years, they only just came up with it a few weeks ago—permits the FBI to entirely circumvent all the elaborate systems of checks and balances in place (or so we thought) to protect our calling records. If investigators can write themselves secret exemptions from the clear intent of the law, then all the ongoing discussion about reform and reauthorization of the PATRIOT Act amounts to a farcical debate about where to place the fortifications along the Maginot Line.

Thursday Links

  • Nat Hentoff: If you’re looking for reform in Cuba, don’t rest your hopes on Raul Castro.
  • Tim Carney, author of Obamanomics: How Barack Obama Is Bankrupting You and Enriching His Wall Street Friends, Corporate Lobbyists, and Union Bosses, gives the inside scoop on why big government is good for big business.

Colbert Report on PATRIOT & Private Spying

Stephen Colbert tackles both Obama’s flip-flop on the PATRIOT Act (“When presidents take office they learn a secret… Unlimited power is awesome!”) and the private sector’s complicity in the growth of the surveillance state—drawing heavily on the invaluable work of Chris Soghoian.

[Embedded video: The Colbert Report, “The Word - Spyvate Sector,” via colbertnation.com]

Tuesday Links

  • Why the Supreme Court should strike down the Public Company Accounting Oversight Board: “Imagine a government agency with the authority to create and enforce laws, prosecute and adjudicate violations, and impose criminal penalties. Then throw in the power to levy taxes to pay for all the above. And for good measure, make the agency independent of political oversight.”

A Preemptive Word on “Lone Wolves”

As Marcy Wheeler notes, the press seem to have settled on the term “lone wolf” to describe Fort Hood gunman Nidal Malik Hasan, which means it’s probably only a matter of time before we encounter a pundit or legislator who is cynical or befuddled enough (or both) to invoke the tragedy in defense of the PATRIOT Act’s constitutionally dubious Lone Wolf provision. (A “matter of time” apparently meaning the time it took me to write that sentence: We have a winner!) Though the Senate Judiciary Committee has approved a bill that would renew the measure, their counterparts in the House wisely—though narrowly—voted to permit it to expire last week.

To spare anyone tempted by this argument some embarrassment: The Lone Wolf provision is totally irrelevant to this case. It could not have been used to investigate Hasan, nor would it have been necessary.

The Lone Wolf provision permits the targeting of non-U.S. persons when there is probable cause to believe they’re preparing to engage in acts of international terrorism. Even if we assume the statutory definition of “international terrorism” could be stretched to cover the Fort Hood attack—and perhaps it could—the provision would have been inapplicable to the Virginia-born Hasan.

So were investigators powerless? Of course not. PATRIOT’s Lone Wolf clause relates only to whether the tools available under the Foreign Intelligence Surveillance Act can be invoked. Shooting people, however, is a crime even when committed for reasons having nothing to do with jihad, and the standard for obtaining a warrant—probable cause—is the same. The chief advantage of FISA tools is that they tend to be both highly secret and, in certain respects, broader than criminal investigative tools—features that are vital when dealing with trained terror agents who are working with an international network it’s important not to tip off, but not so much for “lone wolves,” who by definition lack any such network.

In fact, though, even if the most ambitious reforms proposed by Democrats had been in place, PATRIOT powers could have been brought to bear on Hasan had investigators chosen to do so. We are told, for instance, that investigators months ago became aware of Hasan’s efforts to contact al-Qaeda affiliates abroad. That alone would have provided grounds—again, under current law and under the most civil-liberties protective modifications being considered—for the issuance of National Security Letters seeking his financial and telecommunications records.

The truth is that the Lone Wolf provision didn’t help—and couldn’t have helped—stop this “lone wolf.” Indeed, it’s hard to imagine what additional powers would have been useful here given what it seems investigators already knew. As our recent history makes all too clear, what typically makes the difference between intelligence success and failure is not how much information you can get, at least past a certain point, but knowing what to do with the information you’ve got. But of course, that’s difficult to do, and it doesn’t tend to be the kind of thing that can be fixed with a couple of crude statutory provisions you can brag about in press releases to your constituents. So pundits and legislators see a delicate information-processing system failing to flag the right targets and conclude, every time, that the right solution is more juice! Turn up the voltage! Try that troubleshooting strategy with your laptop sometime and let me know how it works out.

Some Thoughts on the New Surveillance

Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.

I’d sketched out a somewhat longer version of my remarks in advance just to make sure I had my main ideas clear, so I’ll post them here as a sort of preview of a longer and more formal paper on 21st-century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:

Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead.  Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.

So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried (to a British debutante), bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, that he forbade his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.

Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.

The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came onto your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.

So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.

So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.

First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.

The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like.  You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.

But there’s an even less obvious potential harm here. If the police didn’t need a warrant, everyone who made a phone call would know that they could be listening in whenever they felt like it. Wiretapping is expensive and labor-intensive enough that, realistically, they can only be gathering information about a few people at a time. But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others, you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance. To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.

Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations.  So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.

Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say (or even listen to) controversial speech. Looking at the incredible scope of documented intelligence abuses from the ’60s and ’70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the ’20s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card information—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.

The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice, or for a sufficiently unusual regional dialect in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns. If you’re on Facebook, and you and a bunch of your friends all decide to use fake names when you sign up for Twitter, I can still reidentify you, given sufficient computing power and strong algorithms, by mapping the shape of the connections between you—a kind of social fingerprinting. It can involve predictive analysis based on powerful electronic “classifiers” that extract subtle patterns of travel or communication or purchases common to past terrorists and, in effect, write their own algorithms for detecting potential ones.
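
To make the social-fingerprinting idea concrete, here is a minimal, purely illustrative sketch in Python—my own toy example, not any agency’s or researcher’s actual method—that matches pseudonymous accounts to known identities using nothing but the structure of their connections. The account names and graphs are hypothetical, and real reidentification work uses far richer signatures and tolerates noisy, partial overlap between networks.

```python
# Toy sketch of "social fingerprinting": re-identifying pseudonymous accounts
# purely from the shape of their connections. Illustration only; real attacks
# use more robust signatures and handle graphs that only partially overlap.

def signature(graph, node):
    """Structural fingerprint: a node's degree plus the sorted degrees of its neighbors."""
    neighbors = graph[node]
    return (len(neighbors), tuple(sorted(len(graph[n]) for n in neighbors)))

def reidentify(known_graph, anon_graph):
    """Match pseudonymous nodes to known identities that share a unique fingerprint."""
    known_by_sig = {}
    for person in known_graph:
        known_by_sig.setdefault(signature(known_graph, person), []).append(person)

    matches = {}
    for alias in anon_graph:
        candidates = known_by_sig.get(signature(anon_graph, alias), [])
        if len(candidates) == 1:  # only accept structurally unambiguous matches
            matches[alias] = candidates[0]
    return matches

# The same friendship structure under real names (say, Facebook) and under
# throwaway handles (say, Twitter). All names and handles are hypothetical.
facebook = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice"},
    "carol": {"alice"},
    "dave":  {"alice", "eve"},
    "eve":   {"dave"},
}
twitter = {
    "u1": {"u2", "u3", "u4"},
    "u2": {"u1"},
    "u3": {"u1"},
    "u4": {"u1", "u5"},
    "u5": {"u4"},
}

print(reidentify(facebook, twitter))
# {'u1': 'alice', 'u4': 'dave', 'u5': 'eve'} -- only the structural twins u2/u3 stay hidden
```

The particular fingerprint used here hardly matters; the point is that the shape of a social graph is itself identifying, whatever names happen to be attached to its nodes.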

Bracket for the moment whether we think some or all of these methods are wise. It should be crystal clear that a method of oversight designed for up-front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods. It will either forbid them completely or be absent from the parts of the process where the dangers to privacy exist. In practice what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to archive or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: No sufficiently large data set is truly anonymous.

And consider the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, and growing fast, from an array of private- and government-sector sources—some presumably obtained using National Security Letters and Patriot 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans. As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes (that’s 4 million gigabytes) each month. Within about five years, NSA’s archive is expected to be measured in yottabytes—if you want to picture one yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point they will have to make up a new word for the next largest unit of data. As J. Edgar Hoover understood all too well, just having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.
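
For anyone who wants to sanity-check the unit conversions behind those figures, here is a quick sketch using decimal SI prefixes; note that the “multiply the Internet by 2,000” comparison is the estimate cited above, not something arithmetic alone can verify.

```python
# Quick unit check for the storage figures cited above (decimal SI prefixes).
GB = 10**9    # gigabyte, in bytes
PB = 10**15   # petabyte
YB = 10**24   # yottabyte

print(4 * PB // GB)       # 4000000 -> 4 petabytes is indeed 4 million gigabytes
print(4 * PB * 12 // PB)  # 48      -> roughly 48 petabytes per year at that 2006 rate
print(YB // PB)           # 1000000000 -> a yottabyte is a billion petabytes
```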

There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective. But we also need to be aware that if we’re not attuned to the way new technologies may avoid our old tripwires, if we only think of privacy in terms of certain familiar, paradigmatic violations—the boot in the door—then, like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.

If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even if words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights, and to consider as well the systemic and structural effects of the architectures of surveillance we’re constructing.
