Tag: Fourth Amendment

Internet Privacy Law Needs an Upgrade

Imagine for a moment that all your computing devices had to run on code that had been written in 1986. Your smartphone is, alas, entirely out of luck, but your laptop or desktop computer might be able to get online using a dial-up modem. But you’d better be happy with a command-line interface to services like e-mail, Usenet, and Telnet, because the only “Web browsers” anyone’s heard of in 1986 are entomologists. Cloud computing? Location-based services? Social networking? No can do, though you can still get into a raging debate about the relative merits of Macs and PCs.

When it comes to federal privacy law, alas, we are running on code written in 1986: the Electronic Communications Privacy Act, a statute that’s not only ludicrously out of date, but so notoriously convoluted and unclear that even legal experts routinely lament the “mess” of electronic privacy law. Scholar Orin Kerr has called it “famously complex, if not entirely impenetrable.” Part of the problem, to be sure, lies with the courts.  It is scandalous that in 2010, we don’t even have a definitive ruling on whether or when the Fourth Amendment requires the government to get a search warrant to read e-mails stored on a server. But the ECPA statute, meant to fill the gap left by the courts, reads like the rules of James T. Kirk’s fictional card game Fizzbin.

Suppose the police want to read your e-mail. To come into your home and look through your computer, of course, they’d need a full Fourth Amendment search warrant based on probable cause. If they want to intercept the e-mail in transit, they have to go still further and meet the “super-warrant” standards of the Wiretap Act. Once it lands on your Internet Service Provider’s server, a regular search warrant is once again the standard—assuming your ISP is providing access “to the public.” If it’s a more closed network like your work account, your employer is permitted to voluntarily hand it over. But if you read the e-mail, or leave it on the server for more than 180 days, then suddenly your ISP has become a “remote computing service” provider rather than an “electronic communications service” provider vis-à-vis that e-mail. So instead of a probable cause warrant, police can get a 2703(d) order based on “specific and articulable facts” showing the information is “relevant and material” to an investigation—a much lower standard—provided they notify you. Except they can ask a judge to delay notification if they think that would impede the investigation. Oh, unless your ISP is in the Ninth Circuit, where opened e-mails still get the higher level of protection until they’ve “expired in the normal course,” whatever that means.
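To see just how baroque this gets, here is a deliberately simplified caricature of that branching logic as code. The function name and structure are invented for illustration—this is a sketch of the rules described above, not a statement of the law:

```python
# A deliberately oversimplified caricature of ECPA's rules for e-mail
# contents, as described above. Names and structure are invented for
# illustration; this is not legal advice.

def process_required(in_transit, on_public_isp, opened_or_over_180_days,
                     ninth_circuit=False):
    """Return the legal process the government needs to read an e-mail."""
    if in_transit:
        # Interception in transit triggers the Wiretap Act.
        return "Wiretap Act super-warrant"
    if not on_public_isp:
        # A closed network (employer, school) may simply volunteer it.
        return "none needed if the provider volunteers it"
    if opened_or_over_180_days and not ninth_circuit:
        # "Remote computing service" treatment: a lower standard applies.
        return "2703(d) order (specific and articulable facts) plus notice"
    # Otherwise—including opened mail in the Ninth Circuit—the higher
    # standard applies.
    return "probable-cause search warrant"

print(process_required(False, True, True))
# → 2703(d) order (specific and articulable facts) plus notice
```

That a statute invites this kind of flowchart at all is rather the point.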

That’s for e-mail contents.  But maybe they don’t actually need to read your e-mail; maybe they just want some “metadata”—the equivalent of scanning the envelopes of physical letters—to see if your online activity is suspicious enough to warrant a closer look.  Well, then they can get what’s called a pen/trap order based on a mere certification to a judge of “relevance” to capture that information in realtime, but without having to provide any of those “specific and articulable facts.” Unless it’s information that would reveal your location—maybe because you’re e-mailing from your smartphone—in which case, well, the law doesn’t really say, but the Justice Department thinks a pen/trap order plus one of those 2703(d) orders will do, unless it’s really specific location information, at which point they get a warrant. If they want to get those records after the fact, it’s one of those 2703(d) orders—again, unless a non-public provider like your school or employer wants to volunteer them. Oh, unless it’s a counterterror investigation, and the FBI thinks your records might be “relevant” somehow, in which case they can get them with a National Security letter, without getting a judge involved at all.

Dizzy yet? Well, a movement launched today with the aim of dragging our electronic privacy law, kicking and screaming, into the 21st century: The Digital Due Process Coalition.  They’re pushing for a streamlined law that provides clear and consistent protection for sensitive information—the kind of common sense rules you’d have thought would already be in place.  If the government wants to read the contents of your letters, they should need a search warrant—regardless of the phase of the moon when an e-mail is acquired. If they want to track your location, they should need a warrant. And all that “metadata” can be pretty revealing in the digital age—maybe some stricter oversight is in order before they start vacuuming up all our IP logs.

Reforms like these are way overdue. You wouldn’t trust your most sensitive data to software code that hadn’t gone a few years without a security patch. Why would you trust it to legal code that hasn’t had a major patch in over two decades?

On Fourth Amendment Privacy: Everybody’s Wrong

Everybody’s wrong. That’s sort of the message I was putting out when I wrote my 2008 American University Law Review article entitled “Reforming Fourth Amendment Privacy Doctrine.”

A lot of people have poured a lot of effort into the “reasonable expectation of privacy” formulation Justice Harlan wrote about in his concurrence to the 1967 decision in Katz v. United States. But the Fourth Amendment isn’t about people’s expectations or the reasonableness of their expectations. It’s about whether, as a factual matter, they have concealed information from others—and whether the government is being reasonable in trying to discover that information.

The upshot of the “reasonable expectation of privacy” formulation is that the government can argue—straight-faced—that Americans don’t have a Fourth Amendment interest in their locations throughout the day and night because data revealing it is produced by their mobile phones’ interactions with telecommunications providers, and the telecom companies have that data.

I sat down with podcaster extraordinaire Caleb Brown the other day to talk about all this. He titled our conversation provocatively: “Should the Government Own Your GPS Location?”

School Webcams and Strange Gaps in Surveillance Law

Last week, I noted the strange story of a lawsuit filed by parents who allege that their son was spied on by school officials who used security software capable of remotely activating the webcams in laptops distributed to students. A bit more information on that case has since come out. The school district has issued a statement which doesn’t get into the details of the case, but avers that the remote camera capability has only ever been used in an effort to locate laptops believed to have been lost or stolen. (That apparently includes a temporary “loaner computer that, against regulations, might be taken off campus.”)  They do, however, acknowledge that they erred in failing to notify parents about this capability.  The lawyer for the student plaintiff is now telling reporters that school officials called his client in to the vice principal’s office when they mistook his Mike and Ike candies for illegal drugs.

Perhaps most intriguingly, a security blogger has done some probing into the technical capabilities of the surveillance software used by the school district. The blogger also rounds up comments from self-identified students of the high school, many of whom claim that they noticed the webcam light on their school-issued laptops flickering on and off—behavior they were told was a “glitch”—which may provide some reason to question the school’s assertion that this capability was only activated in a handful of cases to locate lost laptops. The FBI, meanwhile, has reportedly opened an investigation to see whether any federal wiretap laws may have been violated.

It’s this last item I want to call attention to. The complaint against the school district states a number of causes of action.  The most obvious one—which sounds to me like a slam dunk—is a Fourth Amendment claim. But there are also a handful of claims under federal wiretapping statutes, specifically the Electronic Communications Privacy Act and the Stored Communications Act. These are more dubious, and rest on the premise that the webcam image was an “electronic communication” that school officials “intercepted” (as those terms are used in the statute), or alternatively that  the activation of the security software involved “unauthorized” access by the school to its own laptop. The trouble is that courts considering similar claims in the past have held that federal electronic surveillance law does not cover silent video surveillance—or rather, the criminal wiretap statutes don’t.

That leads to a strange asymmetry in a couple of different ways. First, intelligence surveillance covered by the Foreign Intelligence Surveillance Act does include silent video monitoring. Second, it seems to provide less protection for a type of monitoring that is arguably still more intrusive. If officials had turned on the laptop’s microphone, that would fall under ECPA’s prohibition on intercepts of “oral communications.” And if the student had been engaged in a video chat using software like Skype, that would clearly constitute an “electronic communication,” even if the audio were not intercepted. But at least in the cases I’m familiar with, the courts have declined to apply that label to surreptitiously recorded silent video—which one might think would be the most invasive of all, given that the target is completely unaware of being observed by anybody.

One final note: The coverage I’m seeing is talking about this as though it involves one school doing something highly unusual. It’s not remotely clear to me that this is the case. We know that at least one other school district employs similar monitoring software, and a growing number of districts are experimenting with issuing laptops to students. I’d like to see reporters start calling around and find out just how many schools are supplying kids with potential telescreens.

Government-Mandated Spying on Bank Customers Undermines both Privacy and Law Enforcement

I recently publicized an interesting map showing that so-called tax havens are not hotbeds of dirty money. A more fundamental question is whether anti-money laundering laws are an effective way of fighting crime – particularly since they substantially undermine privacy.

In this new six-minute video, I ask whether it’s time to radically rethink a system that costs billions of dollars each year, forces banks to snoop on their customers, and misallocates law enforcement resources.

The Government Can Monitor Your Location All Day Every Day Without Implicating Your Fourth Amendment Rights

If you have a mobile phone, that’s the upshot of an argument being put forward by the government in a case being argued before the Third Circuit Court of Appeals tomorrow. The case is called In the Matter of the Application of the United States of America For An Order Directing A Provider of Electronic Communication Service To Disclose Records to the Government.

Declan McCullagh reports:

In that case, the Obama administration has argued that Americans enjoy no “reasonable expectation of privacy” in their—or at least their cell phones’—whereabouts. U.S. Department of Justice lawyers say that “a customer’s Fourth Amendment rights are not violated when the phone company reveals to the government its own records” that show where a mobile device placed and received calls.

The government can maintain this position because of the retrograde “third party doctrine.” That doctrine arose from a pair of cases in the 1970s in which the Supreme Court found no Fourth Amendment problems when the government required service providers to maintain records about their customers, and later required those service providers to hand the records over to the government.

I wrote about these cases, and the courts’ misunderstanding of privacy since 1967’s Katz decision, in an American University Law Review article titled “Reforming Fourth Amendment Privacy Doctrine”:

These holdings were never right, but they grow more wrong with each step forward in modern, connected living. Incredibly deep reservoirs of information are constantly collected by third-party service providers today. Cellular telephone networks pinpoint customers’ locations throughout the day through the movement of their phones. Internet service providers maintain copies of huge swaths of the information that crosses their networks, tied to customer identifiers. Search engines maintain logs of searches that can be correlated to specific computers and usually the individuals that use them. Payment systems record each instance of commerce, and the time and place it occurred. The totality of these records is very, very revealing of people’s lives. They are a window onto each individual’s spiritual nature, feelings, and intellect. They reflect each American’s beliefs, thoughts, emotions, and sensations. They ought to be protected, as they are the modern iteration of our “papers and effects.”

This is a case to watch, as it will help determine whether or not your digital life is an open book to government investigators.

The Virtual Fourth Amendment

I’ve just gotten around to reading Orin Kerr’s fine paper “Applying the Fourth Amendment to the Internet: A General Approach.”  Like most everything he writes on the topic of technology and privacy, it is thoughtful and worth reading.  Here, from the abstract, are the main conclusions:

First, the traditional physical distinction between inside and outside should be replaced with the online distinction between content and non-content information. Second, courts should require a search warrant that is particularized to individuals rather than Internet accounts to collect the contents of protected Internet communications. These two principles point the way to a technology-neutral translation of the Fourth Amendment from physical space to cyberspace.

I’ll let folks read the full arguments to these conclusions in Orin’s own words, but I want to suggest a clarification and a tentative objection.  The clarification is that, while I think the right level of particularity is, broadly speaking, the person rather than the account, search warrants should have to specify in advance either the accounts covered (a list of e-mail addresses) or the method of determining which accounts are covered (“such accounts as the ISP identifies as belonging to the target,” for instance).  Since there’s often substantial uncertainty about who is actually behind a particular online identity, the discretion of the investigator in making that link should be constrained to the maximum practicable extent.

The objection is that there’s an important ambiguity in the physical-space “inside/outside” distinction, and how one interprets it matters a great deal for what the online content/non-content distinction amounts to. The crux of it is this: Several cases suggest that surveillance conducted “outside” a protected space can nevertheless be surveillance of the “inside” of that space. The granddaddy in this line is, of course, Katz v. United States, which held that wiretaps and listening devices may constitute a “search” though they do not involve physical intrusion on private property. Kerr can accommodate this by noting that while this is surveillance “outside” physical space, it captures the “inside” of communication contents. But a greater difficulty is presented by another important case, Kyllo v. United States, with which Kerr deals rather too cursorily.

In Kyllo, the majority—led, perhaps surprisingly, by Justice Scalia!—found that the use without a warrant of a thermal imaging scanner to detect the use of marijuana growing lights in a private residence violated the Fourth Amendment. As Kerr observes, the crux of the disagreement between the majority and the dissent had to do with whether the scanner should be considered to be gathering private information about the interior of the house, or whether it only gathered information (about the relative warmth of certain areas of the house) that might have been obtained by ordinary observation from the exterior of the house.  No great theoretical problem, says Kerr: That only shows that the inside/outside line will sometimes be difficult to draw in novel circumstances. Online, for instance, we may be unsure whether to regard the URL of a specific Web page as mere “addressing” information or as “content”—first, because it typically makes it trivial to learn the content of what a user has read, and second, because URLs often contain the search terms manually entered by users. A similar issue arose with e-mail subject lines, which now seem by general consensus to be regarded as “content” even though they are transmitted in the “header” of an e-mail.

Focus on this familiar (if thorny) line drawing problem, however, misses what is important about the Kyllo case, and the larger problem it presents for Kerr’s dichotomy: Both the majority and the dissent seemed to agree that a more sophisticated scanner capable of detecting, say, the movements of persons within the house, would have constituted a Fourth Amendment search. But reflect, for a moment, on what this means given the way thermal imaging scanners operate. Infrared radiation emitted by objects within the house unambiguously ends up “outside” the house: A person standing on the public street cannot help but absorb some of it. What all the justices appeared to agree on, then, is that the collection and processing of information that is unambiguously outside the house, and is conducted entirely outside the house, may nevertheless amount to a search because it is surveillance of and yields information about the inside of the house. This means that there is a distinction between the space where information is acquired and the space about which it is acquired.

This matters for Kerr’s proposed content/non-content distinction, because in very much the same way, sophisticated measurement and analysis of non-content information may well yield information about content. A few examples may help to make this clear. Secure Shell (SSH) is an encrypted protocol for secure communications. In its interactive mode, SSH transmits each keystroke as a distinct packet—and this packet transmission information is non-content information of the sort that might be obtained, say, using a so-called pen/trap order, issued using a standard of mere “relevance” to an investigation, rather than the “probable cause” required for a full Fourth Amendment search—the same standard Kerr agrees should apply to communications. Yet there are strong and regular patterns in the way human beings type different words on a standard keyboard, such that the content of what is typed—under SSH or any realtime chat protocol that transmits each keystroke as a packet—may be deducible from the non-content packet transmission data given sufficiently advanced analytic algorithms. The analogy to the measurement and analysis of infrared radiation in Kyllo is, I think, quite strong.
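As a toy illustration of that idea—the timing model and candidate words below are invented, and real attacks rely on measured keystroke statistics rather than these made-up numbers—inter-packet delays alone can discriminate among candidate plaintexts:

```python
# Toy sketch of keystroke-timing inference: given observed inter-packet
# delays (one packet per keystroke, as in SSH interactive mode), pick
# the candidate word whose expected inter-key timing profile fits best.
# The digraph timings here are invented for illustration only.

def expected_delays(word, digraph_times):
    """Expected delay (ms) between each successive pair of keys."""
    return [digraph_times.get(pair, 150)  # default for unmodeled pairs
            for pair in zip(word, word[1:])]

def best_guess(observed, candidates, digraph_times):
    """Return the candidate word minimizing squared timing error."""
    def error(word):
        exp = expected_delays(word, digraph_times)
        if len(exp) != len(observed):
            return float("inf")  # wrong number of keystrokes can't match
        return sum((o - e) ** 2 for o, e in zip(observed, exp))
    return min(candidates, key=error)

# Invented per-digraph typing times for two candidate words.
DIGRAPHS = {("t", "h"): 90, ("h", "e"): 100, ("c", "a"): 110, ("a", "t"): 95}

# Observed delays of 92ms and 101ms fit "the" far better than "cat".
print(best_guess([92, 101], ["the", "cat"], DIGRAPHS))  # → the
```

Nothing in the “non-content” packet stream was read, yet the content falls out of the timing.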

It is not hard to come up with a plethora of similar examples. By federal statute, records of the movies a person rents enjoy substantial privacy protection, and the standard for law enforcement to obtain them—probable cause showing of “relevance” and prior notice to the consumer—is higher than required for a mere pen/trap. Yet precise analysis of the size of a file transmitted from a service like Netflix or iTunes could easily reveal either the specific movie or program downloaded, or at the least narrow it down to a reasonably small field of possibilities. Logs of the content-sensitive advertising served by a service like Gmail to a particular user may reveal general information about the contents of user e-mails. Sophisticated social network analysis based on calling or e-mailing patterns of multiple users may reveal, not specific communications contents, but information about the membership and internal structure of various groups and organizations. That amounts to revealing the “contents” of group membership lists, which could have profound First Amendment implications in light of a string of Supreme Court precedents making it clear that state compelled disclosure of such lists may impermissibly burden the freedom of expressive association even when it does not run afoul of Fourth Amendment privacy protections. And running back to Kyllo, especially as “smart” appliances and ubiquitous networked computing become more pervasive, analysis of non-content network traffic may reveal enormous amounts of information about the movements and activities of people within private homes.
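The file-size example is perhaps the easiest to make concrete. Here is a minimal sketch, with an invented catalog of titles and byte counts, of how an observer who sees only the volume of a transfer could narrow down what was downloaded:

```python
# Toy sketch of traffic-size fingerprinting: an observer who sees only
# the number of bytes transferred matches it against a catalog of known
# file sizes. Titles and sizes below are invented for illustration.

CATALOG = {
    "Movie A": 1_402_337_280,
    "Movie B": 1_489_112_064,
    "Movie C": 2_147_210_000,
}

def likely_titles(observed_bytes, tolerance=0.001):
    """Titles whose catalog size is within `tolerance` (as a fraction
    of the file size) of the observed transfer volume."""
    return [title for title, size in CATALOG.items()
            if abs(size - observed_bytes) <= tolerance * size]

print(likely_titles(1_489_000_000))  # → ['Movie B']
```

With a real catalog of thousands of titles, even a loose tolerance typically leaves only a handful of candidates—which is the point: the “non-content” byte count functions as a fingerprint of the content.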

Here’s one way to describe the problem here: The combination of digital technology and increasingly sophisticated analytic methods have complicated the intuitive link between what is directly observed or acquired and what is ultimately subject to surveillance in a broader sense. The natural move here is to try to draw a distinction between what is directly “acquired” and what is learned by mere “inference” from the information acquired. I doubt such a distinction will hold up. It takes a lot of sophisticated processing to turn ambient infrared radiation into an image of the interior of a home; the majority in Kyllo was not sympathetic to the argument that this was mere “inference.” Strictly speaking, after all, the data pulled off an Internet connection is nothing but a string of ones and zeroes. It is only a certain kind of processing that renders it as the text of an e-mail or an IM transcript. If a different sort of processing can derive the same transcript—or at least a fair chunk of it—from the string of ones and zeroes representing packet transmission timing, should we presume there’s a deep constitutional difference?

That is not to say there’s anything wrong with Kerr’s underlying intuition.  But it does, I think, suggest that new technologies will increasingly demand that privacy analysis not merely look at what is acquired but at what is done with it. In a way, the law’s hyperfocus on the moment of acquisition as the unique locus of Fourth Amendment blessing or damnation is the shadow of the myopically property-centric jurisprudence the Court finally found to be inadequate in Katz. As Kerr intimates in his paper, shaking off the digital echoes of that legacy—with its convenient bright lines—is apt to make things fiendishly complex, at least in the initial stages.  But I doubt it can be avoided much longer.

Some Thoughts on the New Surveillance

Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.

I’d sketched out a rather longer version of my remarks in advance just to make sure I had my main ideas clear, and so I’ll post them here, as a sort of preview of a rather longer and more formal paper on 21st century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:

Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead.  Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.

So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried to a British debutante, bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, that he forbade his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.

Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.

The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came on to your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.

So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.

So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.

First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.

The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like.  You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.

But there’s an even less obvious potential harm here. If the police didn’t need a warrant, everyone who made a phone call would know that the government could be listening in whenever it felt like it. Wiretapping is expensive and labor intensive enough that realistically they can only be gathering information about a few people at a time.  But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance.  To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.

Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations.  So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.

Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say–or even listen to–controversial speech. Looking at the incredible scope of documented intelligence abuses from the 60s and 70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the 20s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card information—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.

The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice or a sufficiently unusual regional dialect in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns. If you’re on Facebook, and you and a bunch of your friends all decide to use fake names when you sign up for Twitter, I can still reidentify you, given sufficient computing power and strong algorithms, by mapping the shape of the connections between you—a kind of social fingerprinting. It can involve predictive analysis based on powerful electronic “classifiers” that extract subtle patterns of travel or communication or purchases common to past terrorists in order to write their own algorithms for detecting potential ones.
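To make the social-fingerprinting idea concrete, here is a minimal, purely illustrative sketch. The networks, names, and signature scheme are all invented for this example (no real system works this crudely): each user’s “fingerprint” is just their friend count plus the sorted friend counts of their friends, and when that structural signature is distinctive enough, it survives a change of screen names.

```python
def fingerprint(graph, node):
    """Structural signature: (own friend count, sorted friend counts of friends)."""
    return (len(graph[node]),
            tuple(sorted(len(graph[n]) for n in graph[node])))

def reidentify(known, anonymous):
    """Map pseudonyms to real names wherever a fingerprint matches uniquely."""
    by_fp = {}
    for name in known:
        by_fp.setdefault(fingerprint(known, name), []).append(name)
    matches = {}
    for alias in anonymous:
        candidates = by_fp.get(fingerprint(anonymous, alias), [])
        if len(candidates) == 1:  # unique structural match
            matches[alias] = candidates[0]
    return matches

# A toy friendship graph under real names...
facebook = {
    "alice": {"bob", "carol", "dave", "erin"},
    "bob":   {"alice", "carol", "dave"},
    "carol": {"alice", "bob", "frank"},
    "dave":  {"alice", "bob"},
    "erin":  {"alice"},
    "frank": {"carol"},
}
# ...and the same people, same connections, under fake names.
twitter = {
    "x1": {"x2", "x3", "x4", "x5"},
    "x2": {"x1", "x3", "x4"},
    "x3": {"x1", "x2", "x6"},
    "x4": {"x1", "x2"},
    "x5": {"x1"},
    "x6": {"x3"},
}

matches = reidentify(facebook, twitter)
# Every pseudonym is recovered, e.g. matches["x3"] == "carol"
```

On this tiny network every fingerprint happens to be unique, so every pseudonym is unmasked. On a real network, users in structurally symmetric positions would produce ambiguous matches, which is why serious reidentification techniques iterate over richer neighborhood features rather than stopping at one hop.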

Bracket for the moment whether we think some or all of these methods are wise. It should be crystal clear that a method of oversight designed for up-front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods. It will either forbid them completely or be absent from the parts of the process where the dangers to privacy exist. In practice, what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to archive or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: no sufficiently large data set is truly anonymous.

And consider the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, and growing fast, from an array of private- and government-sector sources—some presumably obtained using National Security Letters and PATRIOT 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans. As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes—that’s 4 million gigabytes—each month. Within about five years, NSA’s archive is expected to be measured in yottabytes. If you want to picture one yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point they will have to coin a new word for the next-largest unit of data. As J. Edgar Hoover understood all too well, merely having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.

There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective. But we also need to be aware that if we’re not attuned to the way new technologies may slip past our old tripwires—if we only think of privacy in terms of certain familiar, paradigmatic violations, the boot in the door—then, like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.

If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even if words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights and start considering the systemic and structural effects of the architectures of surveillance we’re constructing.