Tag: surveillance

The Virtual Fourth Amendment

I’ve just gotten around to reading Orin Kerr’s fine paper “Applying the Fourth Amendment to the Internet: A General Approach.”  Like most everything he writes on the topic of technology and privacy, it is thoughtful and worth reading.  Here, from the abstract, are the main conclusions:

First, the traditional physical distinction between inside and outside should be replaced with the online distinction between content and non-content information. Second, courts should require a search warrant that is particularized to individuals rather than Internet accounts to collect the contents of protected Internet communications. These two principles point the way to a technology-neutral translation of the Fourth Amendment from physical space to cyberspace.

I’ll let folks read the full arguments for these conclusions in Orin’s own words, but I want to suggest a clarification and a tentative objection.  The clarification is that, while I think the right level of particularity is, broadly speaking, the person rather than the account, search warrants should have to specify in advance either the accounts covered (a list of e-mail addresses) or the method of determining which accounts are covered (“such accounts as the ISP identifies as belonging to the target,” for instance).  Since there’s often substantial uncertainty about who is actually behind a particular online identity, the discretion of the investigator in making that link should be constrained to the maximum practicable extent.

The objection is that there’s an important ambiguity in the physical-space “inside/outside” distinction, and how one interprets it matters a great deal for what the online content/non-content distinction amounts to. The crux of it is this: Several cases suggest that surveillance conducted “outside” a protected space can nevertheless be surveillance of the “inside” of that space. The granddaddy in this line is, of course, Katz v. United States, which held that wiretaps and listening devices may constitute a “search” though they do not involve physical intrusion on private property. Kerr can accommodate this by noting that while this is surveillance “outside” physical space, it captures the “inside” of communication contents. But a greater difficulty is presented by another important case, Kyllo v. United States, with which Kerr deals rather too cursorily.

In Kyllo, the majority—led, perhaps surprisingly, by Justice Scalia!—found that the use without a warrant of a thermal imaging scanner to detect the use of marijuana growing lights in a private residence violated the Fourth Amendment. As Kerr observes, the crux of the disagreement between the majority and the dissent had to do with whether the scanner should be considered to be gathering private information about the interior of the house, or whether it only gathered information (about the relative warmth of certain areas of the house) that might have been obtained by ordinary observation from the exterior of the house.  No great theoretical problem, says Kerr: That only shows that the inside/outside line will sometimes be difficult to draw in novel circumstances. Online, for instance, we may be unsure whether to regard the URL of a specific Web page as mere “addressing” information or as “content”—first, because it typically makes it trivial to learn the content of what a user has read, and second, because URLs often contain the search terms manually entered by users. A similar issue arose with e-mail subject lines, which now seem by general consensus to be regarded as “content” even though they are transmitted in the “header” of an e-mail.

Focus on this familiar (if thorny) line drawing problem, however, misses what is important about the Kyllo case, and the larger problem it presents for Kerr’s dichotomy: Both the majority and the dissent seemed to agree that a more sophisticated scanner capable of detecting, say, the movements of persons within the house, would have constituted a Fourth Amendment search. But reflect, for a moment, on what this means given the way thermal imaging scanners operate. Infrared radiation emitted by objects within the house unambiguously ends up “outside” the house: A person standing on the public street cannot help but absorb some of it. What all the justices appeared to agree on, then, is that the collection and processing of information that is unambiguously outside the house, and is conducted entirely outside the house, may nevertheless amount to a search because it is surveillance of and yields information about the inside of the house. This means that there is a distinction between the space where information is acquired and the space about which it is acquired.

This matters for Kerr’s proposed content/non-content distinction, because in very much the same way, sophisticated measurement and analysis of non-content information may well yield information about content. A few examples may help to make this clear. Secure Shell (SSH) is an encrypted protocol for secure communications. In its interactive mode, SSH transmits each keystroke as a distinct packet—and this packet transmission information is non-content information of the sort that might be obtained, say, using a so-called pen/trap order, issued using a standard of mere “relevance” to an investigation, rather than the “probable cause” required for a full Fourth Amendment search—the same standard Kerr agrees should apply to communications. Yet there are strong and regular patterns in the way human beings type different words on a standard keyboard, such that the content of what is typed—under SSH or any realtime chat protocol that transmits each keystroke as a packet—may be deducible from the non-content packet transmission data given sufficiently advanced analytic algorithms. The analogy to the measurement and analysis of infrared radiation in Kyllo is, I think, quite strong.
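To make the keystroke-timing point concrete, here is a deliberately toy sketch in Python. Every number and word in it is invented for illustration, and real attacks on SSH use statistical models (such as hidden Markov models) trained on large typing corpora rather than this crude nearest-match scoring. The principle, though, is the same: candidate content is ranked using nothing but inter-packet timing, which is pure "non-content" data.

```python
# Toy sketch of timing analysis: inferring typed content from inter-packet
# gaps alone. All timing profiles and candidate words are hypothetical.

def score_candidate(observed_gaps, profile_gaps):
    """Sum of squared differences between observed inter-packet gaps (ms)
    and a candidate word's expected inter-keystroke gaps."""
    if len(observed_gaps) != len(profile_gaps):
        return float("inf")
    return sum((o - p) ** 2 for o, p in zip(observed_gaps, profile_gaps))

def guess_word(observed_gaps, profiles):
    """Return the candidate whose typing-rhythm profile best matches the
    observed packet timings: content deduced from non-content data."""
    return min(profiles, key=lambda w: score_candidate(observed_gaps, profiles[w]))

# Hypothetical per-word profiles: milliseconds between successive keystrokes.
profiles = {
    "cat": [110, 95],
    "dog": [140, 180],
}

print(guess_word([112, 90], profiles))  # -> cat
```

A real adversary would, of course, face noise, network jitter, and a vastly larger candidate space, which is exactly why the published versions of this attack lean on probabilistic models rather than exact matching.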

It is not hard to come up with a plethora of similar examples. By federal statute, records of the movies a person rents enjoy substantial privacy protection, and the standard for law enforcement to obtain them—a court order based on a probable-cause showing of “relevance,” with prior notice to the consumer—is higher than that required for a mere pen/trap. Yet precise analysis of the size of a file transmitted from a service like Netflix or iTunes could easily reveal either the specific movie or program downloaded, or at the least narrow it down to a reasonably small field of possibilities. Logs of the content-sensitive advertising served by a service like Gmail to a particular user may reveal general information about the contents of user e-mails. Sophisticated social network analysis based on calling or e-mailing patterns of multiple users may reveal, not specific communications contents, but information about the membership and internal structure of various groups and organizations. That amounts to revealing the “contents” of group membership lists, which could have profound First Amendment implications in light of a string of Supreme Court precedents making it clear that state-compelled disclosure of such lists may impermissibly burden the freedom of expressive association even when it does not run afoul of Fourth Amendment privacy protections. And running back to Kyllo, especially as “smart” appliances and ubiquitous networked computing become more pervasive, analysis of non-content network traffic may reveal enormous amounts of information about the movements and activities of people within private homes.
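The file-size example can be sketched just as simply. The catalog and byte counts below are entirely hypothetical; the point is only that an observed transfer size, pure traffic metadata, can collapse the field of possible titles to a handful of candidates.

```python
# Toy illustration of file-size fingerprinting: matching the observed size
# of an encrypted download against a hypothetical catalog of known titles.
# Real catalogs hold thousands of titles, so this narrows rather than
# pinpoints; all sizes here are invented.

CATALOG = {
    "Film A": 1_472_881_664,   # bytes
    "Film B": 1_903_220_736,
    "Film C": 1_474_002_944,
}

def candidates(observed_bytes, tolerance=2_000_000):
    """Titles whose known file size falls within `tolerance` bytes of the
    observed transfer: content narrowed from non-content metadata alone."""
    return sorted(title for title, size in CATALOG.items()
                  if abs(size - observed_bytes) <= tolerance)

print(candidates(1_473_000_000))  # two titles fall within the tolerance
```

Note that even an ambiguous match (two or three candidate films) reveals far more than the bare fact that some large file was transferred, which is all a pen/trap order is supposed to capture.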

Here’s one way to describe the problem: The combination of digital technology and increasingly sophisticated analytic methods has complicated the intuitive link between what is directly observed or acquired and what is ultimately subject to surveillance in a broader sense. The natural move here is to try to draw a distinction between what is directly “acquired” and what is learned by mere “inference” from the information acquired. I doubt such a distinction will hold up. It takes a lot of sophisticated processing to turn ambient infrared radiation into an image of the interior of a home; the majority in Kyllo was not sympathetic to the argument that this was mere “inference.” Strictly speaking, after all, the data pulled off an Internet connection is nothing but a string of ones and zeroes. It is only a certain kind of processing that renders it as the text of an e-mail or an IM transcript. If a different sort of processing can derive the same transcript—or at least a fair chunk of it—from the string of ones and zeroes representing packet transmission timing, should we presume there’s a deep constitutional difference?

That is not to say there’s anything wrong with Kerr’s underlying intuition.  But it does, I think, suggest that new technologies will increasingly demand that privacy analysis not merely look at what is acquired but at what is done with it. In a way, the law’s hyperfocus on the moment of acquisition as the unique locus of Fourth Amendment blessing or damnation is the shadow of the myopically property-centric jurisprudence the Court finally found to be inadequate in Katz. As Kerr intimates in his paper, shaking off the digital echoes of that legacy—with its convenient bright lines—is apt to make things fiendishly complex, at least in the initial stages.  But I doubt it can be avoided much longer.

Three Keys to Surveillance Success: Location, Location, Location

The invaluable Chris Soghoian has posted some illuminating—and sobering—information on the scope of surveillance being carried out with the assistance of telecommunications providers.  The entire panel discussion from this year’s ISS World surveillance conference is well worth listening to in full, but surely the most striking item is a direct quotation from Sprint’s head of electronic surveillance:

[M]y major concern is the volume of requests. We have a lot of things that are automated but that’s just scratching the surface. One of the things, like with our GPS tool. We turned it on the web interface for law enforcement about one year ago last month, and we just passed 8 million requests. So there is no way on earth my team could have handled 8 million requests from law enforcement, just for GPS alone. So the tool has just really caught on fire with law enforcement. They also love that it is extremely inexpensive to operate and easy, so, just the sheer volume of requests they anticipate us automating other features, and I just don’t know how we’ll handle the millions and millions of requests that are going to come in.

To be clear, that doesn’t mean they are giving law enforcement geolocation data on 8 million people. He’s talking about the wonderful automated backend Sprint runs for law enforcement, LSite, which allows investigators to rapidly retrieve information directly, without the burden of having to get a human being to respond to every specific request for data.  Rather, says Sprint, each of those 8 million requests represents a time when an FBI computer or agent pulled up a target’s location data using their portal or API. (I don’t think you can Tweet subpoenas yet.)  For an investigation whose targets are under ongoing realtime surveillance over a period of weeks or months, that could very well add up to hundreds or thousands of requests for a few individuals. So those 8 million data requests, according to a Sprint representative in the comments, actually “only” represent “several thousand” discrete cases.

As Kevin Bankston argues, that’s not entirely comforting. The Justice Department, Soghoian points out, is badly delinquent in reporting on its use of pen/trap orders, which are generally used to track communications routing information like phone numbers and IP addresses, but are likely to be increasingly used for location tracking. And recent changes in the law may have made it easier for intelligence agencies to turn cell phones into tracking devices.  In the criminal context, the legal process for getting geolocation information depends on a variety of things—different districts have come up with different standards, and it matters whether investigators want historical records about a subject or ongoing access to location info in real time. Some courts have ruled that a full-blown warrant is required in some circumstances; in others, a “hybrid” order consisting of a pen/trap order and a 2703(d) order suffices. But a passage from an Inspector General’s report suggests that the 2005 PATRIOT reauthorization may have made it easier to obtain location data:

After passage of the Reauthorization Act on March 9, 2006, combination orders became unnecessary for subscriber information and [REDACTED PHRASE]. Section 128 of the Reauthorization Act amended the FISA statute to authorize subscriber information to be provided in response to a pen register/trap and trace order. Therefore, combination orders for subscriber information were no longer necessary. In addition, OIPR determined that substantive amendments to the statute undermined the legal basis for which OIPR had received authorization [REDACTED PHRASE] from the FISA Court. Therefore, OIPR decided not to request [REDACTED PHRASE] pursuant to Section 215 until it re-briefed the issue for the FISA Court. As a result, in 2006 combination orders were submitted to the FISA Court only from January 1, 2006, through March 8, 2006.

The new statutory language permits FISA pen/traps to get more information than is allowed under a traditional criminal pen/trap, with a lower standard of review, including “any temporarily assigned network address or associated routing or transmission information.” Bear in mind that it would have made sense to rely on a 215 order only if the information sought was more extensive than what could be obtained using a National Security Letter, which requires no judicial approval. That makes it quite likely that it’s become legally easier to transform a cell phone into a tracking device even as providers are making it point-and-click simple to log into their servers and submit automated location queries.  So it’s become much more urgent that the Justice Department live up to its obligation to tell us how often they’re using these souped-up pen/traps, and how many people are affected.  In congressional debates, pen/trap orders are invariably mischaracterized as minimally intrusive, providing little more than the list of times and phone numbers they produced 30 years ago.  If they’re turning into a plug-and-play solution for lojacking the population, Americans ought to know about it.

If you’re interested enough in this stuff to have made it through that discussion, incidentally, come check out our debate at Cato this afternoon, either in the flesh or via webcast. There will be a simultaneous “tweetchat” hosted by the folks at Get FISA Right.

The FISA Amendments: Behind the Scenes

I’ve been poring over the trove of documents the Electronic Frontier Foundation has obtained detailing the long process by which the FISA Amendments Act—which substantially expanded executive power to conduct sweeping surveillance with little oversight—was hammered out between Hill staffers and lawyers at the Department of Justice and intelligence agencies. The really interesting stuff, of course, is mostly redacted, and I’m only partway through the stacks, but there are a few interesting tidbits so far.

As Wired has already reported, one e-mail shows Bush officials feared that if the attorney general was given too much discretion over retroactive immunity for telecoms that aided in warrantless wiretapping, the next administration might refuse to provide it.

A couple other things stuck out for me. First, while it’s possible they’ve been released before and simply not crossed my desk, there are a series of position papers — so rife with underlining that they look like some breathless magazine subscription pitch — circulated to Congress explaining the Bush administration’s opposition to various proposed amendments to the FAA. Among these was a proposal by Sen. Russ Feingold (D-WI) that would have barred “bulk collection” of international traffic and required that the broad new intelligence authorizations specify (though not necessarily by name) individual targets. The idea here was that if there were particular suspected terrorists (for instance) being monitored overseas, it would be fine to keep monitoring their communications if they began talking with Americans without pausing to get a full-blown warrant — but you didn’t want to give NSA carte blanche to just indiscriminately sweep in traffic between the U.S. and anyone abroad. The position paper included in these documents is more explicit than the others that I’ve seen about the motive for objecting to the bulk collection amendment. Which was, predictably, that they wanted to do bulk collection:

  • It also would prevent the intelligence community from conducting the types of intelligence collection necessary to track terrorists and develop new targets.
  • For example, this amendment could prevent the intelligence community from targeting a particular group of buildings or a geographic area abroad to collect foreign intelligence prior to operations by our armed forces.

So to be clear: Contra the rhetoric we heard at the time, the concern was not simply that NSA would be able to keep monitoring a suspected terrorist when he began calling up Americans. It was to permit the “targeting” of entire regions, scooping up all communications between the United States and the chosen area.

One other exchange at least raises an eyebrow.  If you were following the battle in Congress at the time, you may recall that there was a period when the stopgap Protect America Act had expired — though surveillance authorized pursuant to the law could continue for many months — and before Congress approved the FAA. A week into that period, on February 22, 2008, the attorney general and director of national intelligence sent a letter warning Congress that they were now losing intelligence because providers were refusing to comply with new requests under existing PAA authorizations. A day later, they had to roll that back, and some of the correspondence from the EFF FOIA record makes clear that there was an issue with a single recalcitrant provider who decided to go along shortly after the letter was sent.

But there’s another wrinkle. A week prior to this, just before the PAA was set to expire, Jeremy Bash, the chief counsel for the House Permanent Select Committee on Intelligence, sent an email to “Ken and Ben” about a recent press conference call. It’s clear from context that he’s writing to Assistant Attorney General Kenneth Wainstein and General Counsel for the Director of National Intelligence Ben Powell about this press call, where both men fairly clearly suggest that telecoms are balking for fear that they’ll no longer be immune from liability for participation in PAA surveillance after the statute lapses. Bash wants to confirm whether they really said that “private sector entities have refused to comply with PAA certifications because they were concerned that the law was temporary.” In particular, he wants to know whether this is actually true, because “the briefs I read provided a very different rationale.”  In other words, Bash — who we know was cleared for the most sensitive information about NSA surveillance — was aware of some service providers being reluctant to comply with “new taskings” under the law, but not because of the looming expiration of the statute. One of his correspondents — whether Wainstein or Powell is unclear — shoots back denying having said any such thing (read the transcript yourself) and concluding with a terse:

Not addressing what is in fact the situation on both those issues (compliance and threat to halt) on this email.

In other words, the actual compliance issues they were encountering would have to be discussed over a more secure channel. If the issue wasn’t the expiration, though, what would the issue have been? The obvious alternative possibility is that NSA (or another agency) was attempting to get them to carry out surveillance that they thought might fall outside the scope of either the PAA or a particular authorization. Given how sweeping these were, that should certainly give us pause. It should also raise some questions as to whether, even before that one holdout fell into compliance, the warning letter from the AG and the DNI was misleading. Was there really ever a “gap” resulting from the statute’s sunset, or was it a matter of telecoms balking at an attempt by the intelligence community to stretch the bounds of their legal authority? The latter would certainly fit a pattern we saw again and again under the Bush administration: break the law, inducing a legal crisis, then threaten bloody mayhem if the unlawful program is forced to abruptly halt — at which point a nervous Congress grants its blessing.

Who Reads the Readers?

This is a reminder, citizen: Only cranks worry about vastly increased governmental power to gather transactional data about Americans’ online behavior. Why, just last week, Rep. Lamar Smith (R-TX) informed us that there has not been any “demonstrated or recent abuse” of such authority by means of National Security Letters, which permit the FBI to obtain many telecommunications records without court order. I mean, the last Inspector General report finding widespread and systemic abuse of those came out, like, over a year ago! And as defenders of expanded NSL powers often remind us, similar records can often be obtained by grand jury subpoena.

Subpoenas like, for instance, the one issued last year seeking the complete traffic logs of the left-wing site Indymedia for a particular day. According to tech journo Declan McCullagh:

It instructed [System administrator Kristina] Clair to “include IP addresses, times, and any other identifying information,” including e-mail addresses, physical addresses, registered accounts, and Indymedia readers’ Social Security Numbers, bank account numbers, credit card numbers, and so on.

The sweeping request came with a gag order prohibiting Clair from talking about it. (As a constitutional matter, courts have found that recipients of such orders must at least be allowed to discuss them with attorneys in order to seek advice about their legality, but the subpoena contained no notice of that fact.) Justice Department officials tell McCullagh that the request was never reviewed directly by the Attorney General, as is normally required when information is sought from a press organization. Clair did tell attorneys at the Electronic Frontier Foundation, and when they wrote to U.S. Attorney Timothy Morrison questioning the propriety of the request, it was promptly withdrawn. EFF’s Kevin Bankston explains the legal problems with the subpoena at length.

Perhaps ironically, the targeting of Indymedia, which is about as far left as news sites get, may finally hip the populist right to the perils of the burgeoning surveillance state. It seems to have piqued Glenn Beck’s interest, and McCullagh went on Lou Dobbs’ show to talk about the story. Thus far, the approved conservative position appears to have been that Barack Obama is some kind of ruthless Stalinist with a secret plan to turn the United States into a massive gulag—but under no circumstances should there be any additional checks on his administration’s domestic spying powers.  This always struck me as both incoherent and a tragic waste of paranoia. Now that we’ve had a rather public reminder that such powers can be used to compile databases of people with politically unorthodox browsing habits, perhaps Beck—who seems to be something of an amateur historian—will take some time to delve into the story of COINTELPRO and other related projects our intelligence community busied itself with before we established an architecture of surveillance oversight in the late ’70s.

You know, the one we’ve spent the past eight years dismantling.

Some Thoughts on the New Surveillance

Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.

I’d sketched out a rather longer version of my remarks in advance just to make sure I had my main ideas clear, and so I’ll post them here, as a sort of preview of a rather longer and more formal paper on 21st century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:

Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead.  Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.

So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried to a British debutante, bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, that he forbade his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.

Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.

The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came on to your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.

So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.

So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.

First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.

The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like.  You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.

But there’s an even less obvious potential harm here. If the police didn’t need a warrant, everyone who made a phone call would know that officers could listen in whenever they felt like it. Wiretapping is expensive and labor-intensive enough that realistically police can only be gathering information about a few people at a time.  But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance.  To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.

Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations.  So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.

Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: There’s an individual right, grounded in the equal dignity of free citizens, that’s violated whenever I’m prohibited from expressing my views—but there’s also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say—or even listen to—controversial speech. Looking at the incredible scope of documented intelligence abuses from the 60s and 70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the 20s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card information—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.

The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice or a sufficiently unusual regional dialect in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns.  If you’re on Facebook, and you and a bunch of your friends all decide to use fake names when you sign up for Twitter, I can still reidentify you given sufficient computing power and strong algorithms by mapping the shape of the connections between you—a kind of social fingerprinting. It can involve predictive analysis based on powerful electronic “classifiers” that extract subtle patterns of travel or communication or purchases common to past terrorists in order to write their own algorithms for detecting potential ones.
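The social-fingerprinting idea can be sketched in a few lines of Python. This is a toy illustration with invented names, not anyone’s actual algorithm: it tries to match pseudonymous accounts to known ones by comparing the sorted list of each account’s friends’ friend-counts—a crude structural fingerprint.

```python
def fingerprints(graph):
    """Map each account to the sorted friend-counts of its friends."""
    return {node: sorted(len(graph[friend]) for friend in friends)
            for node, friends in graph.items()}

# A small network where identities are known (think Facebook)...
known = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"alice", "carol"},
}

# ...and the same people under fake names elsewhere (think Twitter),
# with the same friendship structure preserved.
pseudonymous = {
    "x1": {"x2", "x3", "x4"},
    "x2": {"x1", "x3"},
    "x3": {"x1", "x2", "x4"},
    "x4": {"x1", "x3"},
}

known_fp = fingerprints(known)
pseud_fp = fingerprints(pseudonymous)

# For each fake name, list the known accounts with an identical fingerprint.
matches = {p: [k for k, fp in known_fp.items() if fp == pseud_fp[p]]
           for p in pseud_fp}
print(matches)  # each fake name is narrowed to two real candidates
```

Even this crude fingerprint cuts the candidate pool in half; published deanonymization attacks iterate the idea, propagating each confirmed match outward through the graph to disambiguate the rest, and they scale to networks of millions of users.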

Bracket for the moment whether we think some or all of these methods are wise.  It should be crystal clear that a method of oversight designed for up-front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods.  It will either forbid them completely or be absent from the parts of the process where the dangers to privacy actually exist. In practice what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to discard or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: No sufficiently large data set is truly anonymous.

And realize the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, and it is growing fast, fed by an array of private- and government-sector sources—some presumably obtained using National Security Letters and Patriot 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans.  As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes—that’s 4 million gigabytes—each month.  Within about five years, NSA’s archive is expected to be measured in yottabytes—if you want to picture one yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point they will have to make up a new word for the next largest unit of data.  As J. Edgar Hoover understood all too well, just having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.
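For a sense of scale, the unit arithmetic in that paragraph is easy to check (using decimal SI units; the growth figures themselves are the public estimates quoted above, not mine):

```python
GB = 10**9    # gigabyte, in bytes (decimal/SI)
PB = 10**15   # petabyte
YB = 10**24   # yottabyte

# "4 petabytes a month," expressed in gigabytes:
print(4 * PB // GB)  # 4,000,000 -- i.e. "4 million gigabytes"

# A yottabyte is a billion petabytes, so even at 4 PB/month it would
# take 250 million months to accumulate one; yottabyte-scale projections
# assume the rate of collection itself keeps accelerating.
months = YB // (4 * PB)
print(months)  # 250,000,000
```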

There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective.  But we also need to be aware that if we’re not attuned to the ways new technologies may evade our old tripwires—if we only think of privacy in terms of certain familiar, paradigmatic violations, the boot in the door—then like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.

If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even if words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights and start considering the systemic and structural effects of the architectures of surveillance we’re constructing.

Totalitarian Leftovers in Eastern Europe

The Berlin Wall fell 20 years ago.  A hideous symbol of the suppression of liberty, it should remind us of the ever-present threat to our freedoms.  Even two decades later the legacy of repression continues to afflict many people in Eastern Europe.  For instance, those in countries formerly behind the Iron Curtain still struggle with the knowledge that their friends and neighbors routinely spied on them.

Reports the Associated Press:

Stelian Tanase found out when he asked to see the thick file that Romania’s communist-era secret police had kept on him. The revelation nearly knocked the wind out of him: His closest pal was an informer who regularly told agents what Tanase was up to.

“In a way, I haven’t even recovered today,” said Tanase, a novelist who was placed under surveillance and had his home bugged during the late dictator Nicolae Ceausescu’s regime.

“He was the one person on Earth I had the most faith in,” he said. “And I never, ever suspected him.”

Twenty years ago this autumn, communism collapsed across Eastern Europe. But its dark legacy endures in the unanswered question of the files — whether letting the victims read them cleanses old wounds or rips open new ones.

Things have never been so bad here, obviously, but that gives us even more reason to jealously guard our liberties.  Defend America we must, but we must never forget that it is a republic which we are defending.

PATRIOT Powers: Roving Wiretaps

Last week, I wrote a piece for Reason in which I took a close look at the USA PATRIOT Act’s “lone wolf” provision—set to expire at the end of the year, though almost certain to be renewed—and argued that it should be allowed to lapse. Originally, I’d planned to survey the whole array of authorities that are either sunsetting or candidates for reform, but ultimately decided it made more sense to give a thorough treatment to one than to squeeze an inevitably shallow gloss on four or five complex areas of law into the same space. But the Internets are infinite, so I’ve decided to turn the Reason piece into Part I of a continuing series on PATRIOT powers.  In this edition: Section 206, roving wiretap authority.

The idea behind a roving wiretap should be familiar if you’ve ever watched The Wire, where dealers used disposable “burner” cell phones to evade police eavesdropping. A roving wiretap is used when a target is thought to be employing such measures to frustrate investigators, and allows the eavesdropper to quickly begin listening on whatever new phone line or Internet account his quarry may be using, without having to go back to a judge for a new warrant every time. Such authority has long existed for criminal investigations—that’s “Title III” wiretaps if you want to sound clever at cocktail parties—and pretty much everyone, including the staunchest civil liberties advocates, seems to agree that it also ought to be available for terror investigations under the Foreign Intelligence Surveillance Act. So what’s the problem here?

To understand the reasons for potential concern, we need to take a little detour into the differences between electronic surveillance warrants under Title III and FISA. The Fourth Amendment imposes two big requirements on criminal warrants: “probable cause” and “particularity”. That is, you need evidence that the surveillance you’re proposing has some connection to criminal activity, and you have to “particularly [describe] the place to be searched and the persons or things to be seized.” For an ordinary non-roving wiretap, that means you show a judge the “nexus” between evidence of a crime and a particular “place” (a phone line, an e-mail address, or a physical location you want to bug). You will often have a named target, but you don’t need one: If you have good evidence gang members are meeting in some location or routinely using a specific payphone to plan their crimes, you can get a warrant to bug it without necessarily knowing the names of the individuals who are going to show up. On the other hand, though, you do always need that criminal nexus: No bugging Tony Soprano’s AA meeting unless you have some reason to think he’s discussing his mob activity there. Since places and communications facilities may be used by both criminal and innocent persons, the officer monitoring the facility is only supposed to record what’s pertinent to the investigation.

When the tap goes roving, things obviously have to work a bit differently. For roving taps, the warrant shows a nexus between the suspected crime and an identified target. Then, as surveillance gets underway, the eavesdroppers can go up on a line once they’ve got a reasonable belief that the target is “proximate” to a location or communications facility. It stretches that “particularity” requirement a bit, to be sure, but the courts have thus far apparently considered it within bounds. It may help that they’re not used with great frequency: Eleven were issued last year, all to state-level investigators, for narcotics and racketeering investigations.

Surveillance law, however, is not plug-and-play. Importing a power from the Title III context into FISA is a little like dropping an unfamiliar organism into a new environment—the consequences are unpredictable, and may well be dramatic. The biggest relevant difference is that with FISA warrants, there’s always a “target”, and the “probable cause” showing is not of criminal activity, but of a connection between that target and a “foreign power,” which includes terror groups like Al Qaeda. However, for a variety of reasons, both regular and roving FISA warrants are allowed to provide only a description of the target, rather than the target’s identity. Perhaps just as important, FISA has a broader definition of the “person” to be specified as a “target” than Title III. For the purposes of criminal wiretaps, a “person” means any “individual, partnership, association, joint stock company, trust, or corporation.” The FISA definition of “person” includes all of those, but may also be any “group, entity, …or foreign power.” Some, then, worry that roving authority could be used to secure “John Doe” warrants that don’t specify a particular location, phone line, or Internet account—yet don’t sufficiently identify a particular target either. Congress took some steps to attempt to address such concerns when they reauthorized Section 206 back in 2005, and other legislators have proposed further changes—which I’ll get to in a minute. But we actually need to understand a few more things about the peculiarities of FISA wiretaps to see why the risk of overbroad collection is especially high here.

In part because courts have suggested that the constraints of the Fourth Amendment bind more loosely in the foreign intelligence context, FISA surveillance is generally far more sweeping in its acquisition of information. In 2004, the FBI gathered some 87 years’ worth of foreign language audio recordings alone pursuant to FISA warrants. As David Kris (now assistant attorney general for the Justice Department’s National Security Division) explains in his definitive text on the subject, a FISA warrant typically “permits acquisition of nearly all information from a monitored facility or a searched location.” (This may be somewhat more limited for roving taps; I’ll return to the point shortly.) As a rare public opinion from the FISA Court put it in 2002: “Virtually all information seized, whether by electronic surveillance or physical search, is minimized hours, days, or weeks after collection.” The way this is supposed to be squared with the Fourth Amendment rights of innocent Americans who may be swept up in such broad interception is via those “minimization” procedures, employed after the fact to filter out irrelevant information.

That puts a fairly serious burden on these minimization procedures, however, and it’s not clear that they can bear it well. First, consider the standard applied. The FISA Court explains that “communications of or concerning United States persons that could not be foreign intelligence information or are not evidence of a crime… may not be logged or summarized” (emphasis added). This makes a certain amount of sense: FISA intercepts will often be in unfamiliar languages, foreign agents will often speak in coded language, and the significance of a particular statement may not be clear initially. But such a deferential standard does mean they’re retaining an awful lot of data. And indeed, it’s important to recognize that “minimization” does not mean “deletion,” as the Court’s reference to “logs” and “summaries” hints. Typically, intercepts that are “minimized” simply aren’t logged for easy retrieval in a database. In the 80s, this may have been nearly as good for practical purposes as deletion; with the advent of powerful audio search algorithms capable of quickly scanning many hours of recording for particular words or voices, it may no longer make much difference. And we know that much more material than is officially “retained” remains available to agents. In the 2003 case U.S. v. Sattar, pursuant to FISA surveillance, “approximately 5,175 pertinent voice calls … were not minimized.”  But when it came time for the discovery phase of a criminal trial against the FISA targets, the FBI “retrieved and disclosed to the defendants over 85,000 audio files … obtained through FISA surveillance.”

Cognizant of these concerns, Congress tried to add some safeguards in 2005 when they reauthorized the PATRIOT Act. FISA warrants are still permitted to work on descriptions of a target, but the word “specific” was added, presumably to reinforce that the description must be precise enough to uniquely pick out a person or group. They also stipulated that eavesdroppers must inform the FISA Court within ten days of any new facility they eavesdrop on, and explain the “facts justifying a belief that the target is using, or is about to use, that new facility or place.”

Better, to be sure; but without access to the classified opinions of the FISA Court, it’s quite difficult to know just what this means in practice. In criminal investigations, we have a reasonable idea of what the “proximity” standard for roving taps entails. Maybe a target checks into a hotel with a phone in the room, or a dealer is observed to walk up to a pay phone, or to buy a “burner.” It is much harder to guess how the “is using or is about to use” standard will be construed in light of FISA’s vastly broader presumption of sweeping up-front acquisition. Again, we know that the courts have been satisfied to place enormous weight on after-the-fact minimization of communications, and it seems inevitable that they will do so to an even greater extent when they only learn of a new tap ten days (or 60 days with good reason) after eavesdropping has commenced.

We also don’t know how much is built into that requirement that warrants name a “specific” target, and there’s a special problem here when surveillance roves across not only facilities but types of facility. Suppose, for instance, that a FISA warrant is issued for me, but investigators have somehow been unable to learn my identity. Among the data they have obtained for their description, however, are a photograph, a voiceprint from a recording of my phone conversation with a previous target, and the fact that I work at the Cato Institute. Now, this is surely sufficient to pick me out specifically for the purposes of a warrant initially meant for telephone or oral surveillance.  The voiceprint can be used to pluck all and only my conversations from the calls on Cato’s lines. But a description sufficient to specify a unique target in that context may not be sufficient in the context of, say, Internet surveillance, as certain elements of the description become irrelevant, and those remaining threaten to cover a much larger pool of people. Alternatively, if someone has a very unusual regional dialect, that may be sufficiently specific to pinpoint their voice in one location or community using a looser matching algorithm (perhaps because there is no actual recording, or it is brief or of low quality), but insufficient if they travel to another location where many more people have similar accents.

Russ Feingold (D-WI) has proposed amending the roving wiretap language so as to require that a roving tap identify the target. In fact, it’s not clear that this quite does the trick either. First, just conceptually, I don’t know that a sufficiently precise description can be distinguished from an “identity.” There’s an old and convoluted debate in the philosophy of language about whether proper names refer directly to their objects or rather are “disguised definite descriptions,” such that “Julian Sanchez” means “the person who is habitually called that by his friends, works at Cato, annoys others by singing along to Smiths songs incessantly…” and so on.  Whatever the right answer to that philosophical puzzle, clearly for the practical purposes at issue here, a name is just one more kind of description. And for roving taps, there’s the same kind of scope issue: Within Washington, DC, the name “Julian Sanchez” probably either picks me out uniquely or at least narrows the target pool down to a handful of people. In Spain or Latin America—or, more relevant for our purposes, in parts of the country with very large Hispanic communities—it’s a little like being “John Smith.”

This may all sound a bit fanciful. Surely sophisticated intelligence officers are not going to confuse Cato Research Fellow Julian Sanchez with, say, Duke University Multicultural Affairs Director Julian Sanchez? And of course, that is quite unlikely—I’ve picked an absurdly simplistic example for purposes of illustration. But there is quite a lot of evidence in the public record to suggest that intelligence investigations have taken advantage of new technologies to employ “targeting procedures” that do not fit our ordinary conception of how search warrants work. I mentioned voiceprint analysis above; keyword searches of both audio and text present another possibility.

We also know that individuals can often be uniquely identified by their pattern of social or communicative connections. For instance, researchers have found that they can take a completely anonymized “graph” of the social connections on a site like Facebook—basically giving everyone a number instead of a name, but preserving the pattern of who is friends with whom—and then use that graph to relink the numbers to names using the data of a different but overlapping social network like Flickr or Twitter. We know the same can be (and is) done with calling records—since in a sense your phone bill is a picture of another kind of social network. Using such methods of pattern analysis, investigators might determine when a new “burner” phone is being used by the same person they’d previously been targeting at another number, even if most or all of his contacts have also switched phone numbers. Since, recall, the “person” who is the “target” of FISA surveillance may be a “group” or other “entity,” and since I don’t think Al Qaeda issues membership cards, the “description” of the target might consist of a pattern of connections thought to reliably distinguish those who are part of the group from those who merely have some casual link to another member.
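A crude version of the calling-pattern idea can be sketched as follows—a toy with invented phone numbers, not a real targeting procedure. Here a couple of the target’s contacts have kept their old numbers, so simple set overlap is enough to rank candidate “burners”; when everyone switches at once, the same ranking can in principle be done on purely structural features of the calling graph.

```python
def jaccard(a, b):
    """Overlap between two contact sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Contacts observed for the old, known target number...
old_target_contacts = {"202-555-0101", "202-555-0102",
                       "202-555-0103", "202-555-0104"}

# ...and for several new, unidentified numbers. Even partial overlap
# with the old contact set can rank the candidates.
candidates = {
    "new-A": {"202-555-0101", "202-555-0198", "202-555-0199"},
    "new-B": {"202-555-0301", "202-555-0302"},
    "new-C": {"202-555-0101", "202-555-0102",
              "202-555-0103", "202-555-0197"},
}

scores = {num: jaccard(contacts, old_target_contacts)
          for num, contacts in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # new-C 0.6
```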

This brings us to the final concern about roving surveillance under FISA. Criminal wiretaps are always eventually disclosed to their targets after the fact, and typically undertaken with a criminal trial in mind—a trial where defense lawyers will pore over the actions of investigators in search of any impropriety. FISA wiretaps are covert; the targets typically will never learn that they occurred. FISA judges and legislators may be informed, at least in a summary way, about what surveillance was undertaken and what targeting methods were used, but especially if those methods are of the technologically sophisticated type I alluded to above, they are likely to have little choice but to defer to investigators on questions of their accuracy and specificity. Even assuming total honesty by the investigators, judges may not think to question whether a method of pattern analysis that is precise and accurate when applied (say) within a single city or metro area will be as precise at the national level, or whether, given changing social behavior, a method that was precise last year will also be precise next year. Does it matter if an Internet service initially used by a few thousand people—including, perhaps, surveillance targets—comes to be embraced by millions? Precisely because the surveillance is so secretive, it is incredibly hard to know which concerns are urgent and which are not really a problem, let alone how to think about addressing the ones that merit some legislative response.

I nevertheless intend to give it a shot in a broader paper on modern surveillance I’m working on, but for the moment I’ll just say: “It’s tricky.”  What is absolutely essential to take away from this, though, is that these loose and lazy analogies to roving wiretaps in criminal investigations are utterly unhelpful in thinking about the specific problems of roving FISA surveillance. That investigators have long been using “these” powers under Title III is no answer at all to the questions that arise here. Legislators who invoke that fact as though it should soothe every civil libertarian brow are simply evading their responsibilities.