Make sure you watch and share this video revealing the actual conduct of New York City’s stop-and-frisk policy.
The Electronic Frontier Foundation trumpets a surprising privacy win last week in the U.S. Court of Appeals for the D.C. Circuit. In U.S. v. Maynard (PDF), the court held that the use of a GPS tracking device to monitor the public movements of a vehicle—something the Supreme Court had held not to constitute a Fourth Amendment search in U.S. v. Knotts—could nevertheless become a search when conducted over an extended period. The Court in Knotts had considered only tracking that encompassed a single journey on a particular day, reasoning that the target of surveillance could have no “reasonable expectation of privacy” in the fact of a trip that any member of the public might easily observe. But the Knotts Court explicitly reserved judgment on potential uses of the technology with broader scope, such as “dragnet” tracking that subjected large numbers of people to “continuous 24-hour surveillance.” Here, the D.C. court determined that continuous tracking for a period of over a month did violate a reasonable expectation of privacy—and therefore constituted a Fourth Amendment search requiring a judicial warrant—because such intensive, secretive tracking by means of ordinary public observation is so costly and risky that no reasonable person expects to be subject to such comprehensive surveillance.
Perhaps ironically, the court’s logic here rests on the so-called “mosaic theory” of privacy, which the government has relied on when resisting Freedom of Information Act requests. The theory holds that pieces of information that are not in themselves sensitive or potentially injurious to national security can nevertheless be withheld, because in combination (with each other or with other public facts) they permit the inference of facts that are sensitive or secret. The “mosaic,” in other words, may be far more than the sum of the individual tiles that constitute it. Leaving aside for the moment the validity of the government’s invocation of the theory in FOIA cases, it has an obvious intuitive appeal—and indeed, it fits our real-world expectations about privacy much better than the cruder assumption that the sum of “public” facts must itself always be a public fact.
Consider an illustrative hypothetical. Alice and Bob are having a romantic affair that, for whatever reason, they prefer to keep secret. One evening before a planned date, Bob stops by the corner pharmacy and—in full view of a shop full of strangers—buys some condoms. He then drives to a restaurant where, again in full view of the other patrons, they have dinner together. They later drive in separate cars back to Alice’s house, where the neighbors (if they care to take note) can observe from the presence of the car in the driveway that Alice has an evening guest for several hours. It being a weeknight, Bob then returns home, again by public roads. Now, the point of this little story is not, of course, that a judicial warrant should be required before an investigator can physically trail Bob or Alice for an evening. It’s simply that in ordinary life, we often reasonably suppose the privacy or secrecy of certain facts—that Bob and Alice are having an affair—that could in principle be inferred from the combination of other facts that are (severally) clearly public, because it would be highly unusual for all of them to be observed by the same public. Even more so when, as in Maynard, we’re talking not about the “public” events of a single evening, but comprehensive observation over a period of weeks or months. One may reasonably expect that “anyone” might witness any single event in such a series; it does not follow that one cannot reasonably expect that no particular person or group will be privy to all of them. Sometimes, of course, even our reasonable expectations are frustrated without anyone’s rights being violated: A neighbor of Alice’s might by chance have been at the pharmacy and then at the restaurant.
But as the Supreme Court held in Kyllo v. U.S., even when some information might in principle be obtained by public observation, the use of technological means not in general public use to learn the same facts may nevertheless qualify as a Fourth Amendment search, especially when the effect of the technology is to render easy a degree of monitoring that would otherwise be so laborious and costly as to be normally infeasible.
Now, as Orin Kerr argues at the Volokh Conspiracy, significant as the particular result in this case might be, it’s the approach to Fourth Amendment privacy embedded here that’s the really big story. Orin, however, thinks it a hopelessly misguided one—and the objections he offers are all quite forceful. Still, I think on net—especially as technology makes such aggregative monitoring more of a live concern—some kind of shift to a “mosaic” view of privacy is going to be necessary to preserve the practical guarantees of the Fourth Amendment, just as in the 20th century a shift from a wholly property-centric to a more expectations-based theory was needed to prevent remote sensing technologies from gutting its protections. But let’s look more closely at Orin’s objections.
First, there’s the question of novelty. Under the mosaic theory, he writes:
[W]hether government conduct is a search is measured not by whether a particular individual act is a search, but rather whether an entire course of conduct, viewed collectively, amounts to a search. That is, individual acts that on their own are not searches, when committed in some particular combinations, become searches. Thus in Maynard, the court does not look at individual recordings of data from the GPS device and ask whether they are searches. Instead, the court looks at the entirety of surveillance over a one-month period and views it as one single “thing.” Off the top of my head, I don’t think I have ever seen that approach adopted in any Fourth Amendment case.
I can’t think of one that explicitly adopts that argument. But consider again the Kyllo case mentioned above. Without a warrant, police used thermal imaging technology to detect the presence of marijuana-growing lamps within a private home from a vantage point on a public street. In a majority opinion penned by Justice Scalia, the court balked at this: The scan violated the sanctity and privacy of the home, though it involved no physical intrusion, by revealing the kind of information that might trigger Fourth Amendment scrutiny. But stop and think for a moment about how thermal imaging technology works, and try to pinpoint where exactly the Fourth Amendment “search” occurs. The thermal radiation emanating from the home was, well… emanating from the home, and passing through or being absorbed by various nearby people and objects. It beggars belief to think that picking up the radiation could in itself be a search—you can’t help but do that!
When the radiation is actually measured, then? More promising, but then any use of an infrared thermometer within the vicinity of a home might seem to qualify, whether or not the purpose of the user was to gather information about the home, and indeed, whether or not the thermometer was precise enough to reveal any useful information about internal temperature variations within the home. The real privacy violation here—the disclosure of private facts about the interior of the home—occurs only when a series of very many precise measurements of emitted radiation are processed into a thermographic image. To be sure, it is counterintuitive to describe this as a “course of conduct” because the aggregation and analysis are done quite quickly within the processor of the thermal camera, which makes it natural to describe the search as a single act: Creating a thermal image. But if we zoom in, we find that what the Court deemed an unconstitutional invasion of privacy was ultimately the upshot of a series of “public” facts about ambient radiation levels, combined and analyzed in a particular way. The thermal image is, in a rather literal sense, a mosaic.
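The aggregation point can be made concrete with a toy sketch. Nothing here reflects any real thermal-imaging system; the grid size, temperatures, and function names are all invented for illustration. Each individual reading is an innocuous “public” fact about ambient radiation; only the assembled mosaic supports the private inference:

```python
# Hypothetical illustration: individually innocuous point measurements,
# combined into an image, reveal a private fact about the home's interior.

def build_thermogram(readings, width, height):
    """Assemble individual point measurements into a 2D image."""
    image = [[0.0] * width for _ in range(height)]
    for x, y, temp in readings:
        image[y][x] = temp
    return image

def hottest_region(image):
    """Find the peak reading -- the inference that only the assembled
    mosaic, not any single tile, supports."""
    best = None
    for y, row in enumerate(image):
        for x, temp in enumerate(row):
            if best is None or temp > best[2]:
                best = (x, y, temp)
    return best

# A 4x4 scan: ambient readings around 20 C, one interior spot at 45 C
readings = [(x, y, 20.0) for x in range(4) for y in range(4)]
readings[5] = (1, 1, 45.0)  # invented "grow lamp" heat signature

image = build_thermogram(readings, 4, 4)
print(hottest_region(image))  # → (1, 1, 45.0)
```

No single element of `readings` discloses anything; the disclosure happens in the aggregation step, which is the point the mosaic framing captures.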
The same could be said about long-distance spy microphones: Vibrating air is public; conversations are private. Or again, consider location tracking, which is unambiguously a “search” when it extends to private places: It might be that what is directly measured is only the “public” fact about the strength of a particular radio signal at a set of receiver sites; the “private” facts about location could be described as a mere inference, based on triangulation analysis (say), from the observable public facts.
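The triangulation example works the same way, and a small sketch shows the mechanics. The receiver positions, signal-derived ranges, and function names below are invented for illustration; real systems estimate range from signal strength with considerable noise, but the public-measurements-in, private-inference-out structure is the same:

```python
# Hedged sketch: each receiver's range reading is a "public" measurement;
# the location is an inference from combining three of them.

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three receiver positions and ranges.
    Subtracting the circle equations pairwise yields two linear
    equations in x and y, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three hypothetical towers "hear" a phone at signal-derived ranges
position = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
print(position)  # approximately (3.0, 4.0) -- the private fact
```

Each range on its own says only “the phone is somewhere on this circle”; the pinpoint location exists only in the combination.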
There’s also a scope problem. When, precisely, do individual instances of permissible monitoring become a search requiring judicial approval? That’s certainly a thorny question, but it arises just as urgently in the other type of hypothetical case alluded to in Knotts, involving “dragnet” surveillance of large numbers of individuals over time. Here, too, there’s an obvious component of duration: Nobody imagines that taking a single photograph revealing the public locations of perhaps hundreds of people at a given instant constitutes a Fourth Amendment search. And just as there’s no precise number of grains of sand that constitutes a “heap,” there’s no obvious way to say exactly what number of people, observed for how long, are required to distinguish individualized tracking from “dragnet” surveillance. But if we anchor ourselves in the practical concerns motivating the adoption of the Fourth Amendment, it seems clear enough that an interpretation that detected no constitutional problem with continuous monitoring of every public movement of every citizen would mock its purpose. If we accept that much, a line has to be drawn somewhere. Come to think of it, Orin has himself proposed a procedural dichotomy between electronic searches that are “person-focused” and those that are “data-focused.” That approach has much to recommend it, but it is likely to present very similar boundary-drawing problems.
Orin also suggests that the court improperly relies upon a “probabilistic” model of the Fourth Amendment here (looking to what expectations about monitoring are empirically reasonable) whereas the Court has traditionally relied on a “private facts” model to deal with cases involving new technologies (looking to which types of information it is reasonable to consider private by their nature). Without recapitulating the very insightful paper linked above, I’ll say only that the boundaries between the models in Orin’s highly useful schema do not strike me as quite so bright. The ruling in Kyllo, after all, turned in part on the fact that infrared imaging devices are not in “general public use,” suggesting that the identification of “private facts” itself has an empirical and probabilistic component. The analyses aren’t really separate. What’s crucial to bear in mind is that there are always multiple layers of facts involved with even a relatively simple search: Facts about the strength of a particular radio signal, facts about a location in a public or private place at a particular instant, facts about Alice and Bob’s affair. In cases involving new technologies, the problem—though seldom stated explicitly—is often precisely which domain of facts to treat as the “target” of the search. The point of the expectations analysis in Maynard is precisely to establish that there is a domain of facts about macro-level behavioral patterns distinct from the unambiguously public facts about specific public movements at particular times, and that we have different attitudes about these domains.
Sorting all this out going forward is likely to be every bit as big a headache as Orin suggests. But if the Fourth Amendment has a point—if it enjoins us to preserve a particular balance between state power and individual autonomy—then as technology changes, its rules of application may need to get more complicated to track that purpose, as they did when the Court ruled that an admirably simple property rule was no longer an adequate criterion for identifying a “search.” Otherwise we make Fourth Amendment law into a cargo cult, a set of rituals whose elegance of form is cold consolation for their abandonment of function.
If you have a mobile phone, that’s the upshot of an argument being put forward by the government in a case being argued before the Third Circuit Court of Appeals tomorrow. The case is called In the Matter of the Application of the United States of America For An Order Directing A Provider of Electronic Communication Service To Disclose Records to the Government.
In that case, the Obama administration has argued that Americans enjoy no “reasonable expectation of privacy” in their—or at least their cell phones’—whereabouts. U.S. Department of Justice lawyers say that “a customer’s Fourth Amendment rights are not violated when the phone company reveals to the government its own records” that show where a mobile device placed and received calls.
The government can maintain this position because of the retrograde “third party doctrine.” That doctrine arose from a pair of cases in the early 1970s in which the Supreme Court found no Fourth Amendment problems when the government required service providers to maintain records about their customers, and later required those service providers to hand the records over to the government.
I wrote about these cases, and the courts’ misunderstanding of privacy since 1967’s Katz decision, in an American University Law Review article titled “Reforming Fourth Amendment Privacy Doctrine”:
These holdings were never right, but they grow more wrong with each step forward in modern, connected living. Incredibly deep reservoirs of information are constantly collected by third-party service providers today. Cellular telephone networks pinpoint customers’ locations throughout the day through the movement of their phones. Internet service providers maintain copies of huge swaths of the information that crosses their networks, tied to customer identifiers. Search engines maintain logs of searches that can be correlated to specific computers and usually the individuals that use them. Payment systems record each instance of commerce, and the time and place it occurred. The totality of these records are very, very revealing of people’s lives. They are a window onto each individual’s spiritual nature, feelings, and intellect. They reflect each American’s beliefs, thoughts, emotions, and sensations. They ought to be protected, as they are the modern iteration of our “papers and effects.”
This is a case to watch, as it will help determine whether or not your digital life is an open book to government investigators.
Last week, I wrote a piece for Reason in which I took a close look at the USA PATRIOT Act’s “lone wolf” provision—set to expire at the end of the year, though almost certain to be renewed—and argued that it should be allowed to lapse. Originally, I’d planned to survey the whole array of authorities that are either sunsetting or candidates for reform, but ultimately decided it made more sense to give a thorough treatment to one than to squeeze an inevitably shallow gloss on four or five complex areas of law into the same space. But the Internets are infinite, so I’ve decided to turn the Reason piece into Part I of a continuing series on PATRIOT powers. In this edition: Section 206, roving wiretap authority.
The idea behind a roving wiretap should be familiar if you’ve ever watched The Wire, where dealers used disposable “burner” cell phones to evade police eavesdropping. A roving wiretap is used when a target is thought to be employing such measures to frustrate investigators, and allows the eavesdropper to quickly begin listening on whatever new phone line or Internet account his quarry may be using, without having to go back to a judge for a new warrant every time. Such authority has long existed for criminal investigations—that’s “Title III” wiretaps if you want to sound clever at cocktail parties—and pretty much everyone, including the staunchest civil liberties advocates, seems to agree that it also ought to be available for terror investigations under the Foreign Intelligence Surveillance Act. So what’s the problem here?
To understand the reasons for potential concern, we need to take a little detour into the differences between electronic surveillance warrants under Title III and FISA. The Fourth Amendment imposes two big requirements on criminal warrants: “probable cause” and “particularity.” That is, you need evidence that the surveillance you’re proposing has some connection to criminal activity, and you have to “particularly [describe] the place to be searched and the persons or things to be seized.” For an ordinary non-roving wiretap, that means you show a judge the “nexus” between evidence of a crime and a particular “place” (a phone line, an e-mail address, or a physical location you want to bug). You will often have a named target, but you don’t need one: If you have good evidence gang members are meeting in some location or routinely using a specific payphone to plan their crimes, you can get a warrant to bug it without necessarily knowing the names of the individuals who are going to show up. On the other hand, though, you do always need that criminal nexus: No bugging Tony Soprano’s AA meeting unless you have some reason to think he’s discussing his mob activity there. Since places and communications facilities may be used by both criminal and innocent persons, the officer monitoring the facility is only supposed to record what’s pertinent to the investigation.
When the tap goes roving, things obviously have to work a bit differently. For roving taps, the warrant shows a nexus between the suspected crime and an identified target. Then, as surveillance gets underway, the eavesdroppers can go up on a line once they’ve got a reasonable belief that the target is “proximate” to a location or communications facility. It stretches that “particularity” requirement a bit, to be sure, but the courts have thus far apparently considered it within bounds. It may help that they’re not used with great frequency: Eleven were issued last year, all to state-level investigators, for narcotics and racketeering investigations.
Surveillance law, however, is not plug-and-play. Importing a power from the Title III context into FISA is a little like dropping an unfamiliar organism into a new environment—the consequences are unpredictable, and may well be dramatic. The biggest relevant difference is that with FISA warrants, there’s always a “target”, and the “probable cause” showing is not of criminal activity, but of a connection between that target and a “foreign power,” which includes terror groups like Al Qaeda. However, for a variety of reasons, both regular and roving FISA warrants are allowed to provide only a description of the target, rather than the target’s identity. Perhaps just as important, FISA has a broader definition of the “person” to be specified as a “target” than Title III. For the purposes of criminal wiretaps, a “person” means any “individual, partnership, association, joint stock company, trust, or corporation.” The FISA definition of “person” includes all of those, but may also be any “group, entity, …or foreign power.” Some, then, worry that roving authority could be used to secure “John Doe” warrants that don’t specify a particular location, phone line, or Internet account—yet don’t sufficiently identify a particular target either. Congress took some steps to attempt to address such concerns when they reauthorized Section 206 back in 2005, and other legislators have proposed further changes—which I’ll get to in a minute. But we actually need to understand a few more things about the peculiarities of FISA wiretaps to see why the risk of overbroad collection is especially high here.
In part because courts have suggested that the constraints of the Fourth Amendment bind more loosely in the foreign intelligence context, FISA surveillance is generally far more sweeping in its acquisition of information. In 2004, the FBI gathered some 87 years’ worth of foreign language audio recordings alone pursuant to FISA warrants. As David Kris (now assistant attorney general for the Justice Department’s National Security Division) explains in his definitive text on the subject, a FISA warrant typically “permits acquisition of nearly all information from a monitored facility or a searched location.” (This may be somewhat more limited for roving taps; I’ll return to the point shortly.) As a rare public opinion from the FISA Court put it in 2002: “Virtually all information seized, whether by electronic surveillance or physical search, is minimized hours, days, or weeks after collection.” The way this is supposed to be squared with the Fourth Amendment rights of innocent Americans who may be swept up in such broad interception is via those “minimization” procedures, employed after the fact to filter out irrelevant information.
That puts a fairly serious burden on these minimization procedures, however, and it’s not clear that they bear it well. First, consider the standard applied. The FISA Court explains that “communications of or concerning United States persons that could not be foreign intelligence information or are not evidence of a crime… may not be logged or summarized” (emphasis added). This makes a certain amount of sense: FISA intercepts will often be in unfamiliar languages, foreign agents will often speak in coded language, and the significance of a particular statement may not be clear initially. But such a deferential standard does mean they’re retaining an awful lot of data. And indeed, it’s important to recognize that “minimization” does not mean “deletion,” as the Court’s reference to “logs” and “summaries” hints. Typically, intercepts that are “minimized” simply aren’t logged for easy retrieval in a database. In the 1980s, this may have been nearly as good for practical purposes as deletion; with the advent of powerful audio search algorithms capable of scanning many hours of recording quickly for particular words or voices, it may not make much difference. And we know that much more material than is officially “retained” remains available to agents. In the 2003 case U.S. v. Sattar, pursuant to FISA surveillance, “approximately 5,175 pertinent voice calls… were not minimized.” But when it came time for the discovery phase of a criminal trial against the FISA targets, the FBI “retrieved and disclosed to the defendants over 85,000 audio files… obtained through FISA surveillance.”
Cognizant of these concerns, Congress tried to add some safeguards in 2005 when they reauthorized the PATRIOT Act. FISA warrants are still permitted to work on descriptions of a target, but the word “specific” was added, presumably to reinforce that the description must be precise enough to uniquely pick out a person or group. They also stipulated that eavesdroppers must inform the FISA Court within ten days of any new facility they eavesdrop on, and explain the “facts justifying a belief that the target is using, or is about to use, that new facility or place.”
Better, to be sure; but without access to the classified opinions of the FISA Court, it’s quite difficult to know just what this means in practice. In criminal investigations, we have a reasonable idea of what the “proximity” standard for roving taps entails. Maybe a target checks into a hotel with a phone in the room, or a dealer is observed to walk up to a pay phone, or to buy a “burner.” It is much harder to guess how the “is using or is about to use” standard will be construed in light of FISA’s vastly broader presumption of sweeping up-front acquisition. Again, we know that the courts have been satisfied to place enormous weight on after-the-fact minimization of communications, and it seems inevitable that they will do so to an even greater extent when they only learn of a new tap ten days (or 60 days with good reason) after eavesdropping has commenced.
We also don’t know how much is built into that requirement that warrants name a “specific” target, and there’s a special problem here when surveillance roves across not only facilities but types of facility. Suppose, for instance, that a FISA warrant is issued for me, but investigators have somehow been unable to learn my identity. Among the data they have obtained for their description, however, are a photograph, a voiceprint from a recording of my phone conversation with a previous target, and the fact that I work at the Cato Institute. Now, this is surely sufficient to pick me out specifically for the purposes of a warrant initially meant for telephone or oral surveillance. The voiceprint can be used to pluck all and only my conversations from the calls on Cato’s lines. But a description sufficient to specify a unique target in that context may not be sufficient in the context of, say, Internet surveillance, as certain elements of the description become irrelevant, and those that remain threaten to cover a much larger pool of people. Alternatively, if someone has a very unusual regional dialect, that may be sufficiently specific to pinpoint their voice in one location or community using a looser matching algorithm (perhaps because there is no actual recording, or it is brief or of low quality), but insufficient if they travel to another location where many more people have similar accents.
Russ Feingold (D-WI) has proposed amending the roving wiretap language so as to require that a roving tap identify the target. In fact, it’s not clear that this quite does the trick either. First, just conceptually, I don’t know that a sufficiently precise description can be distinguished from an “identity.” There’s an old and convoluted debate in the philosophy of language about whether proper names refer directly to their objects or rather are “disguised definite descriptions,” such that “Julian Sanchez” means “the person who is habitually called that by his friends, works at Cato, annoys others by singing along to Smiths songs incessantly…” and so on. Whatever the right answer to that philosophical puzzle, clearly for the practical purposes at issue here, a name is just one more kind of description. And for roving taps, there’s the same kind of scope issue: Within Washington, DC, the name “Julian Sanchez” probably either picks me out uniquely or at least narrows the target pool down to a handful of people. In Spain or Latin America—or, more relevant for our purposes, in parts of the country with very large Hispanic communities—it’s a little like being “John Smith.”
This may all sound a bit fanciful. Surely sophisticated intelligence officers are not going to confuse Cato Research Fellow Julian Sanchez with, say, Duke University Multicultural Affairs Director Julian Sanchez? And of course, that is quite unlikely—I’ve picked an absurdly simplistic example for purposes of illustration. But there is quite a lot of evidence in the public record to suggest that intelligence investigations have taken advantage of new technologies to employ “targeting procedures” that do not fit our ordinary conception of how search warrants work. I mentioned voiceprint analysis above; keyword searches of both audio and text present another possibility.
We also know that individuals can often be uniquely identified by their pattern of social or communicative connections. For instance, researchers have found that they can take a completely anonymized “graph” of the social connections on a site like Facebook—basically giving everyone a number instead of a name, but preserving the pattern of who is friends with whom—and then use that graph to relink the numbers to names using the data of a different but overlapping social network like Flickr or Twitter. We know the same can be (and is) done with calling records—since in a sense your phone bill is a picture of another kind of social network. Using such methods of pattern analysis, investigators might determine when a new “burner” phone is being used by the same person they’d previously been targeting at another number, even if most or all of his contacts have also switched phone numbers. Since, recall, the “person” who is the “target” of FISA surveillance may be a “group” or other “entity,” and since I don’t think Al Qaeda issues membership cards, the “description” of the target might consist of a pattern of connections thought to reliably distinguish those who are part of the group from those who merely have some casual link to another member.
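To see how re-identification by connection pattern works, here is a deliberately simplified toy sketch. The graphs, names, and the crude “fingerprint” heuristic are all invented for illustration; real de-anonymization research uses far more sophisticated matching, but the core idea is the same: a node’s position in the web of connections can identify it even after every label is stripped.

```python
# Hypothetical sketch: match anonymized nodes to named ones by comparing
# connection patterns alone. Both graphs are invented for illustration.

def fingerprint(graph, node):
    """Structural signature: the sorted degrees of a node's neighbors.
    It survives renaming, since it depends only on the link pattern."""
    return tuple(sorted(len(graph[nbr]) for nbr in graph[node]))

def reidentify(anon_graph, named_graph):
    """Map anonymous IDs to names whose fingerprints match uniquely."""
    named_fps = {}
    for name in named_graph:
        named_fps.setdefault(fingerprint(named_graph, name), []).append(name)
    mapping = {}
    for node in anon_graph:
        candidates = named_fps.get(fingerprint(anon_graph, node), [])
        if len(candidates) == 1:  # only claim a match when it's unique
            mapping[node] = candidates[0]
    return mapping

# Anonymized call graph (numbers) vs. overlapping public social graph
# (names), sharing the same underlying pattern of who talks to whom
anon = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1, 5}, 5: {4}}
named = {"alice": {"bob", "carol", "dave"}, "bob": {"alice"},
         "carol": {"alice"}, "dave": {"alice", "eve"}, "eve": {"dave"}}

print(reidentify(anon, named))  # → {1: 'alice', 4: 'dave', 5: 'eve'}
```

Note that nodes 2 and 3 stay unmatched because their patterns are interchangeable; the distinctively connected nodes are the ones that can be relinked, which is why unusual connection patterns are so identifying.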
This brings us to the final concern about roving surveillance under FISA. Criminal wiretaps are always eventually disclosed to their targets after the fact, and typically undertaken with a criminal trial in mind—a trial where defense lawyers will pore over the actions of investigators in search of any impropriety. FISA wiretaps are covert; the targets typically will never learn that they occurred. FISA judges and legislators may be informed, at least in a summary way, about what surveillance was undertaken and what targeting methods were used, but especially if those methods are of the technologically sophisticated type I alluded to above, they are likely to have little choice but to defer to investigators on questions of their accuracy and specificity. Even assuming total honesty by the investigators, judges may not think to question whether a method of pattern analysis that is precise and accurate when applied (say) within a single city or metro area will be as precise at the national level, or whether, given changing social behavior, a method that was precise last year will also be precise next year. Does it matter if an Internet service initially used by a few thousands—including, perhaps, surveillance targets—comes to be embraced by millions? Precisely because the surveillance is so secretive, it is incredibly hard to know which concerns are urgent and which are not really a problem, let alone how to think about addressing the ones that merit some legislative response.
I nevertheless intend to give it a shot in a broader paper on modern surveillance I’m working on, but for the moment I’ll just say: “It’s tricky.” What is absolutely essential to take away from this, though, is that these loose and lazy analogies to roving wiretaps in criminal investigations are utterly unhelpful in thinking about the specific problems of roving FISA surveillance. That investigators have long been using “these” powers under Title III is no answer at all to the questions that arise here. Legislators who invoke that fact as though it should soothe every civil libertarian brow are simply evading their responsibilities.
The Supreme Court’s decision today in Safford Unified School District #1 et al. v. Redding was a victory for privacy and decency. The Court held that a middle school violated the Fourth Amendment rights of a thirteen-year-old girl by strip searching her in a failed effort to find prescription-strength ibuprofen pills and an over-the-counter painkiller.
The Cato Institute filed an amicus brief, joined by the Rutherford Institute and the Goldwater Institute, opposing such abuses of school officials’ authority. The search in this case should have ended with the student’s backpack and pockets; forcing a teenage girl to pull her bra and panties away from her body for visual inspection is an invasion of privacy that must be reserved for extreme cases. School officials should be authorized to conduct such a search only when they have credible evidence that the student is in possession of objects posing a danger to the school and that the student has hidden them in a place that only a strip search will uncover.
Today’s decision should not come as a surprise. School officials were not granted unlimited police power in the seminal student search case, New Jersey v. T.L.O. Justice Stevens explored the limits of school searches in his partial concurrence and partial dissent, specifically mentioning strip searches. “To the extent that deeply intrusive searches are ever reasonable outside the custodial context, it surely must only be to prevent imminent, and serious harm.”
The Fourth Amendment exists to preserve a balance between the individual’s reasonable expectation of privacy and the state’s need for order and security. Unnecessarily traumatizing students with invasive and humiliating breaches of personal privacy upsets this balance. Today’s decision restores reasonable limits to student searches and provides valuable guidance to school officials.
This work by Cato Institute is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.