Tag: surveillance

Some Thoughts on the New Surveillance

Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.

I’d sketched out a rather longer version of my remarks in advance just to make sure I had my main ideas clear, and so I’ll post them here, as a sort of preview of a rather longer and more formal paper on 21st century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:

Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead.  Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.

So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried to a British debutante, bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, that he forbade his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.

Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.

The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came on to your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.

So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.

So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.

First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.

The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like.  You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.

But there’s an even less obvious potential harm here. If police didn’t need a warrant, everyone who made a phone call would know that the government could listen in whenever it felt like it. Wiretapping is expensive and labor intensive enough that, realistically, police can only gather information about a few people at a time. But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance. To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.

Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations.  So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.

Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say—or even listen to—controversial speech. Looking at the incredible scope of documented intelligence abuses from the 60s and 70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the 20s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card information—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.

The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice, or for a sufficiently unusual regional dialect in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns. If you’re on Facebook, and you and a bunch of your friends all decide to use fake names when you sign up for Twitter, I can still reidentify you, given sufficient computing power and strong algorithms, by mapping the shape of the connections between you—a kind of social fingerprinting. It can involve predictive analysis based on powerful electronic “classifiers” that extract subtle patterns of travel or communication or purchases common to past terrorists in order to write algorithms for detecting potential ones.

Bracket for the moment whether we think some or all of these methods are wise. It should be crystal clear that a method of oversight designed for up-front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods. It will either forbid them completely or be absent from the parts of the process where the dangers to privacy exist. In practice what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to archive or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: No sufficiently large data set is truly anonymous.

And consider the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, and growing fast, from an array of private and government sector sources—some presumably obtained using National Security Letters and Patriot 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans. As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes—that’s 4 million gigabytes—each month. Within about five years, NSA’s archive is expected to be measured in yottabytes—if you want to picture one yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point they will have to make up a new word for the next largest unit of data. As J. Edgar Hoover understood all too well, just having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.
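Those prefixes are easy to misread, so here’s a quick back-of-the-envelope check of the unit conversions (assuming decimal SI prefixes; the figures are illustrative, not official):

```python
# Sanity-check the storage units above, using decimal SI prefixes:
# 1 GB = 10^9 bytes, 1 PB = 10^15 bytes, 1 YB = 10^24 bytes.
GB = 10**9
PB = 10**15
YB = 10**24

monthly_growth = 4 * PB
print(monthly_growth // GB)   # 4 petabytes expressed in gigabytes: 4,000,000

# How long would the reported 4 PB/month stream take to fill one yottabyte?
months = YB / monthly_growth
print(months / 12)            # years
```

The second figure comes out to tens of millions of years, which underscores that a yottabyte-scale archive implies aggregation from vastly more than any single reported stream.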

There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective. But we also need to be aware that if we’re not attuned to the way new technologies may slip past our old tripwires—if we only think of privacy in terms of certain familiar, paradigmatic violations, the boot in the door—then like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.

If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even if words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights, but also consider the systemic and structural effects of the architectures of surveillance we’re constructing.

Totalitarian Leftovers in Eastern Europe

The Berlin Wall fell 20 years ago.  A hideous symbol of the suppression of liberty, it should remind us of the ever-present threat to our freedoms.  Even two decades later the legacy of repression continues to afflict many people in Eastern Europe.  For instance, those in countries formerly behind the Iron Curtain still struggle with the knowledge that their friends and neighbors routinely spied on them.

Reports the Associated Press:

Stelian Tanase found out when he asked to see the thick file that Romania’s communist-era secret police had kept on him. The revelation nearly knocked the wind out of him: His closest pal was an informer who regularly told agents what Tanase was up to.

“In a way, I haven’t even recovered today,” said Tanase, a novelist who was placed under surveillance and had his home bugged during the late dictator Nicolae Ceausescu’s regime.

“He was the one person on Earth I had the most faith in,” he said. “And I never, ever suspected him.”

Twenty years ago this autumn, communism collapsed across Eastern Europe. But its dark legacy endures in the unanswered question of the files — whether letting the victims read them cleanses old wounds or rips open new ones.

Things have never been so bad here, obviously, but that gives us even more reason to jealously guard our liberties.  Defend America we must, but we must never forget that it is a republic which we are defending.

PATRIOT Powers: Roving Wiretaps

Last week, I wrote a piece for Reason in which I took a close look at the USA PATRIOT Act’s “lone wolf” provision—set to expire at the end of the year, though almost certain to be renewed—and argued that it should be allowed to lapse. Originally, I’d planned to survey the whole array of authorities that are either sunsetting or candidates for reform, but ultimately decided it made more sense to give a thorough treatment to one than to squeeze an inevitably shallow gloss on four or five complex areas of law into the same space. But the Internets are infinite, so I’ve decided to turn the Reason piece into Part I of a continuing series on PATRIOT powers. In this edition: Section 206, roving wiretap authority.

The idea behind a roving wiretap should be familiar if you’ve ever watched The Wire, where dealers used disposable “burner” cell phones to evade police eavesdropping. A roving wiretap is used when a target is thought to be employing such measures to frustrate investigators, and allows the eavesdropper to quickly begin listening on whatever new phone line or Internet account his quarry may be using, without having to go back to a judge for a new warrant every time. Such authority has long existed for criminal investigations—that’s “Title III” wiretaps if you want to sound clever at cocktail parties—and pretty much everyone, including the staunchest civil liberties advocates, seems to agree that it also ought to be available for terror investigations under the Foreign Intelligence Surveillance Act. So what’s the problem here?


To understand the reasons for potential concern, we need to take a little detour into the differences between electronic surveillance warrants under Title III and FISA. The Fourth Amendment imposes two big requirements on criminal warrants: “probable cause” and “particularity.” That is, you need evidence that the surveillance you’re proposing has some connection to criminal activity, and you have to “particularly [describe] the place to be searched and the persons or things to be seized.” For an ordinary non-roving wiretap, that means you show a judge the “nexus” between evidence of a crime and a particular “place” (a phone line, an e-mail address, or a physical location you want to bug). You will often have a named target, but you don’t need one: If you have good evidence gang members are meeting in some location or routinely using a specific payphone to plan their crimes, you can get a warrant to bug it without necessarily knowing the names of the individuals who are going to show up. On the other hand, though, you do always need that criminal nexus: No bugging Tony Soprano’s AA meeting unless you have some reason to think he’s discussing his mob activity there. Since places and communications facilities may be used by both criminal and innocent persons, the officer monitoring the facility is only supposed to record what’s pertinent to the investigation.

When the tap goes roving, things obviously have to work a bit differently. For roving taps, the warrant shows a nexus between the suspected crime and an identified target. Then, as surveillance gets underway, the eavesdroppers can go up on a line once they’ve got a reasonable belief that the target is “proximate” to a location or communications facility. It stretches that “particularity” requirement a bit, to be sure, but the courts have thus far apparently considered it within bounds. It may help that they’re not used with great frequency: Eleven were issued last year, all to state-level investigators, for narcotics and racketeering investigations.

Surveillance law, however, is not plug-and-play. Importing a power from the Title III context into FISA is a little like dropping an unfamiliar organism into a new environment—the consequences are unpredictable, and may well be dramatic. The biggest relevant difference is that with FISA warrants, there’s always a “target”, and the “probable cause” showing is not of criminal activity, but of a connection between that target and a “foreign power,” which includes terror groups like Al Qaeda. However, for a variety of reasons, both regular and roving FISA warrants are allowed to provide only a description of the target, rather than the target’s identity. Perhaps just as important, FISA has a broader definition of the “person” to be specified as a “target” than Title III. For the purposes of criminal wiretaps, a “person” means any “individual, partnership, association, joint stock company, trust, or corporation.” The FISA definition of “person” includes all of those, but may also be any “group, entity, …or foreign power.” Some, then, worry that roving authority could be used to secure “John Doe” warrants that don’t specify a particular location, phone line, or Internet account—yet don’t sufficiently identify a particular target either. Congress took some steps to attempt to address such concerns when they reauthorized Section 206 back in 2005, and other legislators have proposed further changes—which I’ll get to in a minute. But we actually need to understand a few more things about the peculiarities of FISA wiretaps to see why the risk of overbroad collection is especially high here.

In part because courts have suggested that the constraints of the Fourth Amendment bind more loosely in the foreign intelligence context, FISA surveillance is generally far more sweeping in its acquisition of information. In 2004, the FBI gathered some 87 years worth of foreign language audio recordings alone pursuant to FISA warrants. As David Kris (now assistant attorney general for the Justice Department’s National Security Division) explains in his definitive text on the subject, a FISA warrant typically “permits acquisition of nearly all information from a monitored facility or a searched location.” (This may be somewhat more limited for roving taps; I’ll return to the point shortly.) As a rare public opinion from the FISA Court put it in 2002: “Virtually all information seized, whether by electronic surveillance or physical search, is minimized hours, days, or weeks after collection.” The way this is supposed to be squared with the Fourth Amendment rights of innocent Americans who may be swept up in such broad interception is via those “minimization” procedures, employed after the fact to filter out irrelevant information.

That puts a fairly serious burden on these minimization procedures, however, and it’s not clear that they can bear it. First, consider the standard applied. The FISA Court explains that “communications of or concerning United States persons that could not be foreign intelligence information or are not evidence of a crime… may not be logged or summarized” (emphasis added). This makes a certain amount of sense: FISA intercepts will often be in unfamiliar languages, foreign agents will often speak in coded language, and the significance of a particular statement may not be clear initially. But such a deferential standard does mean they’re retaining an awful lot of data. And indeed, it’s important to recognize that “minimization” does not mean “deletion,” as the Court’s reference to “logs” and “summaries” hints. Typically, intercepts that are “minimized” simply aren’t logged for easy retrieval in a database. In the 80s, this may have been nearly as good for practical purposes as deletion; with the advent of powerful audio search algorithms capable of quickly scanning many hours of recordings for particular words or voices, the absence of an index may not make much difference. And we know that much more material than is officially “retained” remains available to agents. In the 2003 case U.S. v. Sattar, pursuant to FISA surveillance, “approximately 5,175 pertinent voice calls … were not minimized.” But when it came time for the discovery phase of a criminal trial against the FISA targets, the FBI “retrieved and disclosed to the defendants over 85,000 audio files … obtained through FISA surveillance.”

Cognizant of these concerns, Congress tried to add some safeguards in 2005 when they reauthorized the PATRIOT Act. FISA warrants are still permitted to work on descriptions of a target, but the word “specific” was added, presumably to reinforce that the description must be precise enough to uniquely pick out a person or group. They also stipulated that eavesdroppers must inform the FISA Court within ten days of any new facility they eavesdrop on, and explain the “facts justifying a belief that the target is using, or is about to use, that new facility or place.”

Better, to be sure; but without access to the classified opinions of the FISA Court, it’s quite difficult to know just what this means in practice. In criminal investigations, we have a reasonable idea of what the “proximity” standard for roving taps entails. Maybe a target checks into a hotel with a phone in the room, or a dealer is observed to walk up to a pay phone, or to buy a “burner.” It is much harder to guess how the “is using or is about to use” standard will be construed in light of FISA’s vastly broader presumption of sweeping up-front acquisition. Again, we know that the courts have been satisfied to place enormous weight on after-the-fact minimization of communications, and it seems inevitable that they will do so to an even greater extent when they only learn of a new tap ten days (or 60 days with good reason) after eavesdropping has commenced.

We also don’t know how much is built into that requirement that warrants name a “specific” target, and there’s a special problem here when surveillance roves across not only facilities but types of facility. Suppose, for instance, that a FISA warrant is issued for me, but investigators have somehow been unable to learn my identity. Among the data they have obtained for their description, however, are a photograph, a voiceprint from a recording of my phone conversation with a previous target, and the fact that I work at the Cato Institute. Now, this is surely sufficient to pick me out specifically for the purposes of a warrant initially meant for telephone or oral surveillance.  The voiceprint can be used to pluck all and only my conversations from the calls on Cato’s lines. But a description sufficient to specify a unique target in that context may not be sufficient in the context of, say, Internet surveillance, as certain elements of the description become irrelevant, and the remaining threaten to cover a much larger pool of people. Alternatively, if someone has a very unusual regional dialect, that may be sufficiently specific to pinpoint their voice in one location or community using a looser matching algorithm (perhaps because there is no actual recording, or it is brief or of low quality), but insufficient if they travel to another location where many more people have similar accents.
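The scale problem here is ordinary base-rate arithmetic: a matcher that is “specific enough” against a small pool generates an unmanageable crowd of false hits against a large one. A minimal sketch, with an entirely hypothetical false-positive rate chosen purely for illustration:

```python
# Hypothetical matcher that wrongly flags an innocent speaker 0.1% of
# the time. The rate is made up for illustration; the point is only how
# a fixed error rate scales with the size of the population searched.
false_positive_rate = 0.001

# One office's phone lines, a metro area, and a national-scale pool.
for pool_size in (10_000, 1_000_000, 300_000_000):
    expected_false_matches = pool_size * false_positive_rate
    print(f"pool of {pool_size:>11,}: ~{expected_false_matches:,.0f} expected false matches")
```

Against ten thousand speakers the matcher is nearly pinpoint; against a national pool, the very same accuracy sweeps in hundreds of thousands of innocent people—which is the sense in which a description can be “specific” in one context and hopelessly broad in another.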

Russ Feingold (D-WI) has proposed amending the roving wiretap language so as to require that a roving tap identify the target. In fact, it’s not clear that this quite does the trick either. First, just conceptually, I don’t know that a sufficiently precise description can be distinguished from an “identity.” There’s an old and convoluted debate in the philosophy of language about whether proper names refer directly to their objects or rather are “disguised definite descriptions,” such that “Julian Sanchez” means “the person who is habitually called that by his friends, works at Cato, annoys others by singing along to Smiths songs incessantly…” and so on.  Whatever the right answer to that philosophical puzzle, clearly for the practical purposes at issue here, a name is just one more kind of description. And for roving taps, there’s the same kind of scope issue: Within Washington, DC, the name “Julian Sanchez” probably either picks me out uniquely or at least narrows the target pool down to a handful of people. In Spain or Latin America—or, more relevant for our purposes, in parts of the country with very large Hispanic communities—it’s a little like being “John Smith.”

This may all sound a bit fanciful. Surely sophisticated intelligence officers are not going to confuse Cato Research Fellow Julian Sanchez with, say, Duke University Multicultural Affairs Director Julian Sanchez? And of course, that is quite unlikely—I’ve picked an absurdly simplistic example for purposes of illustration. But there is quite a lot of evidence in the public record to suggest that intelligence investigations have taken advantage of new technologies to employ “targeting procedures” that do not fit our ordinary conception of how search warrants work. I mentioned voiceprint analysis above; keyword searches of both audio and text present another possibility.

We also know that individuals can often be uniquely identified by their pattern of social or communicative connections. For instance, researchers have found that they can take a completely anonymized “graph” of the social connections on a site like Facebook—basically giving everyone a number instead of a name, but preserving the pattern of who is friends with whom—and then use that graph to relink the numbers to names using the data of a different but overlapping social network like Flickr or Twitter. We know the same can be (and is) done with calling records—since in a sense your phone bill is a picture of another kind of social network. Using such methods of pattern analysis, investigators might determine when a new “burner” phone is being used by the same person they’d previously been targeting at another number, even if most or all of his contacts have also switched phone numbers. Since, recall, the “person” who is the “target” of FISA surveillance may be a “group” or other “entity,” and since I don’t think Al Qaeda issues membership cards, the “description” of the target might consist of a pattern of connections thought to reliably distinguish those who are part of the group from those who merely have some casual link to another member.
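To make the idea concrete, here is a toy sketch of structural reidentification—not the published deanonymization algorithm, just the core intuition that the shape of your connections can serve as a fingerprint. The graphs and names are invented for illustration:

```python
# Toy structural reidentification: match anonymized nodes to named ones
# purely by the shape of their connections. Each node's "fingerprint" is
# its degree plus the sorted degrees of its neighbors.

named = {  # a hypothetical network where real names are known
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol", "dave"},
    "carol": {"alice", "bob"},
    "dave": {"bob"},
}

# The same people on another service, "anonymized" as numbers.
mapping = {"alice": 1, "bob": 2, "carol": 3, "dave": 4}
anon = {mapping[u]: {mapping[v] for v in vs} for u, vs in named.items()}

def fingerprint(graph, node):
    """Degree of the node plus the sorted degrees of its neighbors."""
    return (len(graph[node]),
            tuple(sorted(len(graph[n]) for n in graph[node])))

# Index the named graph by fingerprint, then relink any anonymized node
# whose fingerprint is unique.
by_fp = {}
for u in named:
    by_fp.setdefault(fingerprint(named, u), []).append(u)

relinked = {}
for x in anon:
    candidates = by_fp.get(fingerprint(anon, x), [])
    if len(candidates) == 1:
        relinked[x] = candidates[0]

print(relinked)  # → {2: 'bob', 4: 'dave'}
```

Note that in this toy example alice and carol share a fingerprint and stay ambiguous; the real research attacks resolve such ties by seeding with a handful of known matches and propagating outward, which is why they scale to networks of millions.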

This brings us to the final concern about roving surveillance under FISA. Criminal wiretaps are always eventually disclosed to their targets after the fact, and typically undertaken with a criminal trial in mind—a trial where defense lawyers will pore over the actions of investigators in search of any impropriety. FISA wiretaps are covert; the targets typically will never learn that they occurred. FISA judges and legislators may be informed, at least in a summary way, about what surveillance was undertaken and what targeting methods were used, but especially if those methods are of the technologically sophisticated type I alluded to above, they are likely to have little choice but to defer to investigators on questions of their accuracy and specificity. Even assuming total honesty by the investigators, judges may not think to question whether a method of pattern analysis that is precise and accurate when applied (say) within a single city or metro area will be as precise at the national level, or whether, given changing social behavior, a method that was precise last year will also be precise next year. Does it matter if an Internet service initially used by a few thousand—including, perhaps, surveillance targets—comes to be embraced by millions? Precisely because the surveillance is so secretive, it is incredibly hard to know which concerns are urgent and which are not really a problem, let alone how to think about addressing the ones that merit some legislative response.

I nevertheless intend to give it a shot in a broader paper on modern surveillance I’m working on, but for the moment I’ll just say: “It’s tricky.”  What is absolutely essential to take away from this, though, is that these loose and lazy analogies to roving wiretaps in criminal investigations are utterly unhelpful in thinking about the specific problems of roving FISA surveillance. That investigators have long been using “these” powers under Title III is no answer at all to the questions that arise here. Legislators who invoke that fact as though it should soothe every civil libertarian brow are simply evading their responsibilities.

Contempt of (Secret) Court?

At last week’s House Judiciary Committee hearing on the PATRIOT Act, Rep. Hank Johnson (D-GA) raised an interesting question I haven’t seen discussed much: What happens to someone who willfully violates an order of the highly secretive Foreign Intelligence Surveillance Court (the FISA Court)?

Generally, courts have the right to enforce their own orders by finding those who disobey in contempt, and a line from a rare public version of an opinion issued by the Foreign Intelligence Surveillance Court of Review suggests that the same holds here, noting that a service provider who challenged the (now superseded) Protect America Act “began compliance under threat of civil contempt.” (There is, interestingly, some redacted text immediately following that.) Contempt proceedings normally fall to the court that issued the original order.

A finding of civil contempt will typically result in the incarceration of the offending party until they agree to comply—and on the theory that the person “holds the keys to their own cell,” because they’ll be released as soon as they fall in line, normal due process rules don’t apply here. Of course, there are ways of violating the order that make it impossible to comply after the fact, such as breaching the gag rule that prevents people from disclosing that they’ve been served with orders, or (getting extreme now) destroying the records or “tangible things” sought via a Section 215 order. In those cases, presumably, the only recourse would be criminal contempt, for which you’re supposed to be entitled to a jury trial if the penalty is “serious” and involves more than six months’ incarceration.

That obviously raises some interesting problems given the extraordinarily secret nature of the FISA Court. In the public version of the opinion I linked above, the name of the petitioner and all identifying details are redacted, and even the ruling itself was released only six months after it was handed down, so as to avoid tipping off targets about specific providers that have received orders.

Now, I’m going to take a leap of faith and assume we’re not at the point of “disappearing” folks off our own streets, but it is a puzzle how you’d actually carry out enforcement and penalty, if it ever came to that, consistent with the secrecy demanded in these investigations.

A Preliminary Assessment of PATRIOT Reform Bills

Hearings were held on both sides of the Hill last week to consider a trio of surveillance powers set to expire under PATRIOT Act sunset rules. But the stage is set for a much broader fight over the sweeping expansion of search and surveillance authority seen over the past eight years; the chairmen of both the House and Senate Judiciary Committees have announced their intention to use the occasion to revisit the entire edifice of post-9/11 surveillance law. Two major reform bills have already been introduced: Sen. Russ Feingold’s JUSTICE Act and Sen. Patrick Leahy’s USA PATRIOT Sunset Extension Act. Both would preserve the core of most of the new intelligence tools while strengthening oversight and introducing more robust checks against abuse or overreach. The JUSTICE Act, however, is both significantly broader in scope and frequently establishes more stringent and precisely crafted civil liberties safeguards. Most observers expect the Leahy bill to provide the basis for the legislation ultimately reported out of Judiciary, the central question being how much of JUSTICE will be incorporated into that legislation during markup later this week. While the surveillance authorities and oversight measures covered in each bill are varied and complex, it’s worth examining the differences in some detail.

Immunity

One thing to get out of the way first: Most of the press coverage I’ve seen of Feingold’s bill to date leads with the provision that would repeal the retroactive legal immunity Congress granted to telecommunications firms that participated in the National Security Agency’s program of warrantless wiretaps. During last year’s debate over reforms to the Foreign Intelligence Surveillance Act, most reporters seem to have decided that because the immunity controversy was the sexiest or the easiest aspect of the FISA amendments to explain, it was also the most important. Which is pretty much backwards. Granting retroactive immunity was a bad idea, but the repeal clause in the JUSTICE Act is (a) not terribly likely to pass, and (b) ultimately trivial compared with the need to place reasonable limits on powers that, without strong oversight, could permit large-scale spying on Americans. Somewhat ominously, a separate telecom immunity bill was introduced Monday with the co-sponsorship of both Feingold and Leahy, along with Chris Dodd and Jeff Merkley. I say “ominously” because it can be read as indicating a consensus among Democratic senators to focus on the headline-friendly immunity issue at the expense of the more important safeguards on future surveillance. More hopefully, breaking it out could be a “we tried” move designed to win plaudits from allies and draw fire from enemies without letting the measure be a poison pill in the broader reform bill, where the stuff that matters ends up. Time will tell, obviously.

Lone Wolves

That aside, let’s start with the three expiring provisions, which I discussed briefly last week.  The so-called “lone wolf” provision allows the special investigative powers of FISA, which normally require a target to be an “agent of a foreign power,” to be used on non-citizens who lack any apparent affiliation to a terrorist group, but nevertheless are thought to be engaged in “international terrorism or activities in preparation therefor.” The Leahy bill would renew it; Feingold’s JUSTICE Act does not. Lone wolf authority has never been invoked, suggesting that, as yet, it has been neither subject to abuse nor particularly urgently needed. But since the statutory definition of a “lone wolf” requires evidence of criminal conduct—engagement in “international terrorism”—any case in which it would apply should also be a case where investigators would be able to obtain an ordinary Title III criminal warrant.

That seems like the more appropriate approach for some of the cases that the Justice Department apparently thinks would be covered, such as a person who “self-radicalizes” by reading terrorist Web sites. If that is the extent of the “international” connection required, the provision uncomfortably blurs the line between domestic national security investigations, for which the Fourth Amendment demands a traditional warrant, and foreign intel investigations where an array of special considerations closely linked to the actual involvement of “foreign powers” justify greater leeway for investigators.

Roving Taps

Both bills would renew FISA’s “roving wiretap” authority, which permits investigators to eavesdrop on targets without specifying a particular phone line or e-mail account in advance, in order to deal with suspects who may rapidly change communications venues in an attempt to thwart surveillance. Under FISA, however, owing to the difficulties inherent in foreign intel surveillance, the target of a warrant can be merely described rather than directly identified. This led to worries about “John Doe” roving warrants that would contain neither the target’s name nor any particular location. Congress added some extra language in 2006, requiring the target to be “specifically described”—that is, if not a name, a precise enough description to single out a unique individual—in roving warrants, and also required after-the-fact notice to the court when surveillance “roved” to a new facility.

Given the secrecy inherent in FISA proceedings, it’s impossible to know precisely how investigators and the court have interpreted this new language, or whether it truly provides an adequate safeguard. Where the Leahy bill would renew roving as currently written, JUSTICE adds the requirement that roving warrants contain the “identity” of the target, and codifies the principle that roving taps should only be activated during periods when it is reasonable to believe the target is “proximate to” the facility. The latter language, it should be noted, may actually have the practical effect of loosening restrictions on roving taps. Even in roving cases, FISA’s minimization provisions require an evidentiary “nexus” between the target and a facility that “is being used, or about to be used” by the target. The “proximity” standard pulled across from the Title III criminal context may actually be more permissive.

215 “Tangible Thing” Orders

Last of the provisions expiring this year is the authority, under Section 215 of FISA, to compel the production of “any tangible thing” from just about anyone, though it’s primarily intended to cover various kinds of business records. Under the original PATRIOT Act, this required only a certification to the secret FISA court that the records or objects sought were “relevant” to an investigation. In 2006, Congress added a requirement that applications for 215 orders include some factual showing of relevance, but many kinds of requests were deemed presumptively relevant.

Both bills tighten this up, with some minor differences. Both now limit 215 orders to records pertaining to suspected agents of foreign powers, the activities of those agents, or persons known to be in direct contact with or otherwise linked to those agents. This preempts expanding friend-of-a-friend fishing expeditions where the target’s father’s brother’s nephew’s cousin’s former roommate’s colonoscopy results are potentially “relevant.” Feingold adds a “least intrusive means” requirement when the records pertain to “activities”—since in that case the presumption is that the identities of the specific targets are unknown, and the order seeks to discover them. Feingold’s bill also permits records to pertain to a “subject of an ongoing and authorized national security investigation” other than an agent of a foreign power, which would appear to broaden the scope of accessible records.

Neither bill responds to the concern raised by civil libertarians that “contact” with a suspect is too vaguely defined. Again, since we’re necessarily ignorant about precisely how courts have construed the “relevance” standard, it wouldn’t hurt to make explicit that when the records sought pertain to non-targets in “contact” with a target, there be some showing that establishes a nexus between the nature of the contact and the investigatory purpose to obtain foreign intelligence information.

National Security Letters

That covers the expiring provisions. Fortunately, both bills recognize that it would be fruitless to tighten restrictions on 215 orders without doing something to rein in the vastly more frequently used National Security Letters. An Inspector General audit found that in at least one instance, the FBI improperly used NSLs to obtain information they had previously sought under a 215 order, and which the FISA court had denied on the grounds that the investigation raised First Amendment concerns.

More generally, it’s believed that, especially after Congress imposed some restrictions on the scope of 215 orders, investigators have preferred to instead rely on relatively unfettered NSLs whenever possible. Almost 50,000 were issued in 2006 alone, and the majority were used to obtain information about U.S. persons.  These are slightly more restricted in their application, allowing acquisition of records from telecoms and “financial institutions,” but PATRIOT removed many limitations on the types of records that could be sought from those institutions, and post-PATRIOT reforms vastly expanded the definition of “financial institution” to cover many businesses we wouldn’t intuitively describe that way: pawnshops, casinos, travel agencies, businesses with lots of cash transactions, and probably your nephew’s piggy bank. Crucially, they are issued by investigative agencies—mainly the FBI—without court approval. Inspector General audits have discovered rampant misuse of this tool.

Both bills contain language parallel to their 215 sections requiring a tighter link between the records sought and the subject of the investigation. Significantly, the JUSTICE Act also restores pre-PATRIOT limitations on the kinds of records that can be sought, limiting NSLs to relatively basic information about clients or subscribers and requiring a court order for more sensitive data. The Leahy bill would establish a new four-year sunset for expanded NSL authorities; Feingold’s does not, presumably because it already substantially rolls back PATRIOT’s expansion of those authorities. Greg Nojeim of the Center for Democracy and Technology argues that NSL reform is the most important part of the PATRIOT reauthorization debate.

Gag Orders

NSLs and 215 orders are both routinely accompanied by gag orders, which several courts have found to raise significant First Amendment problems. Both bills allow recipients of NSLs or 215 orders to challenge both the orders and any accompanying gag, and shift the burden of proof from the recipient to the government to show that the gag—now limited in duration, but renewable—is necessary to avert harm to an investigation or to national security. Previously, recipients seeking to challenge a gag were in the unenviable position of proving that there was “no reason” to think disclosure could have any adverse consequence. JUSTICE, however, goes further in detailing the specific kinds of harms that may justify imposition of a gag, and requiring a showing of a direct link between the alleged harm and the particular investigation, while the Leahy bill permits more generalized and vague allegations of harm.

Also covered under both bills are pen registers and trap-and-trace devices, typically bundled together under the rubric of “pen/trap” surveillance, which involves acquiring communications metadata—the numbers and times of incoming and outgoing phone calls, e-mail addresses, Web URLs visited, and the like—under a lower standard than would be required for a full-blown search or wiretap. Again echoing the language of their 215 and NSL provisions, both bills put some teeth into the “relevance” requirement by limiting whose metadata can be obtained. JUSTICE, however, also imposes these limits on criminal pen/trap orders for the first time, closing a potential loophole that would remain if only FISA pen/trap orders were covered.

Reporting and Audits

Finally, the Leahy and Feingold bills both include an array of enhanced reporting requirements, mandating somewhat more detailed public disclosure of how often different investigative tools are used. Leahy’s bill also requires the Inspector General of the Department of Justice to conduct a series of annual audits, with reports to Congress, on the use of “tangible things” orders, pen/trap surveillance, and NSLs.

JUSTICE-Only Reforms: FISA Amendments Act

That covers the terrain in which the two bills overlap. But arguably the most important difference between the Leahy and Feingold bills—and along with more stringent NSL reform, perhaps the most important component of the JUSTICE Act that should be ported into whatever bill is finally reported out of Judiciary—concerns the changes made to the ill-advised FISA Amendments Act passed last year. That law gave the Attorney General broad power to authorize wiretaps aimed at communications between the U.S. and other countries, with only anemic court oversight.

The JUSTICE Act provides stronger barriers to “reverse targeting,” in which an authorization nominally directed at a party abroad is granted for the purpose of eavesdropping on a particular U.S. person’s foreign communications. The new language clarifies that surveillance is impermissibly “reverse targeted” when it is a “significant purpose”—as opposed to “the purpose”—of the surveillance to listen in on the American party. When one side of a communication is in the U.S., the bill triggers additional requirements that either the particular communication be relevant to terrorism (not merely “foreign intelligence,” which is far broader) or that the foreign party to the communication be affiliated with a terrorist group.

Perhaps most important of all, JUSTICE bars “bulk collection”—massive, vacuum cleaner acquisition of international communications—by requiring that at least one party to any communication “acquired” be an actual individual target, though not necessarily a named or known target. While this is plainly intended to prevent the kind of Orwellian computer-filtered fishing expeditions civil libertarians have worried might be authorized, it’s important to note that there’s a potentially huge loophole here, involving ambiguity about the point at which a communication is technically “acquired.” It’s too complicated to cover in detail here, but I’ve written about it in my previous life as a journalist. If, as the government has argued in the past, acquisition only occurs when an intercepted communication is “fixed in a human readable format,” the new language would bar bulk recording in an intelligible form, but not necessarily bar bulk collection for computer filtering. Again, the issues here are fairly complex, and I’m working on a paper that takes them up in greater detail.

Other JUSTICE-Only Reforms

There’s a hodgepodge of other changes in the ambitious JUSTICE Act, and I’ll just mention very briefly some of the most important ones. The bill puts some stricter limits on the granting of so-called “Sneak-and-Peek” warrants, which allow disclosure of a search to its target to be delayed for long periods. As David Rittgers observed yesterday, these were sold as necessary for terror investigations, but as with some other PATRIOT powers, have ended up being invoked overwhelmingly in ordinary criminal cases. To prevent abuse, the bill also tweaks the language of a PATRIOT provision designed to allow monitoring of computer hackers. It narrows the definition of the crime of “material support” for terrorism to make clear that it covers knowing support for criminal activities—as opposed to, say, humanitarian aid. And it ensures that PATRIOT’s definition of “domestic terrorism” can’t be applied to (legitimately illegal but non-terrorist) civil disobedience by political groups.

Either bill would do a great deal to halt the erosion of civil liberties safeguards we’ve seen over the past eight years, and in general these are reforms well crafted to provide oversight and checks against abuse without depriving investigators of tools vital to legitimate national security investigations. The most important items here, however, are the more stringent limitations on National Security Letters embodied in the JUSTICE Act, and that legislation’s common-sense limits on the frankly astonishing discretion to authorize surveillance granted the executive branch under the FISA Amendments Act. How those provisions fare will tell us how serious Congress is about protecting civil liberties.

State Secrets, State Secrets Are No Fun

Despite Barack Obama’s frequent paeans to the value of transparency during the presidential campaign, his Justice Department has incensed civil liberties advocates by parroting the Bush administration’s broad invocations of the “state secrets privilege” in an effort to torpedo lawsuits challenging controversial interrogation and surveillance policies. Though in many cases the underlying facts have already been widely reported, DOJ lawyers implausibly claimed, not merely that particular classified information should not be aired in open court, but that any discussion of the CIA’s “extraordinary rendition” of detainees to torture-friendly regimes, or of the NSA’s warrantless wiretapping, would imperil national security.

That may—emphasis on may—finally begin to change as of October 1st, when new guidelines for the invocation of the privilege issued by Attorney General Eric Holder kick in. Part of the change is procedural: state secrets claims will need to go through a review board and secure the personal approval of the Attorney General. Substantively, the new rules raise the bar for assertions of privilege by requiring attorneys to provide courts with specific evidence showing reason to expect disclosure would result in “significant harm” to national security. Moreover, those assertions would have to be narrowly tailored so as to allow cases to proceed on the basis of as much information as can safely be disclosed.

That’s the theory, at any rate. The ACLU is skeptical, and argues that relying on AG guidelines to curb state secrets overreach is like relying on the fox to guard the hen house. And indeed, hours after the announcement of the new guidelines—admittedly not yet in effect—government attorneys were singing the state secrets song in a continuing effort to get a suit over allegations of illegal wiretapping tossed. The cynical read here is that the new guidelines are meant to mollify legislators contemplating statutory limits on state secrets claims while preserving executive discretion to continue making precisely the same arguments, so long as they add the word “significant” and jump through a few extra hoops. Presumably we’ll start to see how serious they are come October. And as for those proposed statutory limits, if the new administration’s commitment to greater accountability is genuine, they should now have no objection to formal rules that simply reinforce the procedures and principles they’ve voluntarily embraced.