
“VIPR” Stands for “Visible Intermodal Prevention and Response” …

… and it’s sinking its fangs into Americans’ civil liberties.

Here’s a story about a “VIPR” team performing a “sting” operation on innocent Americans at a bus terminal in Florida, searching their persons and bags and discovering their petty crimes.

It’s almost a certainty that whoever named this sub-unit of the Department of Homeland Security thought it was a clever way to convey machismo and give a sense of mission to members of VIPR teams. But it also illustrates how the 9/11 terrorist attacks have caused the United States to lose its grip and behave like a cornered snake rather than a strong, free country.

The natural illogic of VIPR stings is that terrorism can strike anywhere, so VIPR teams should search anywhere. It’s the undoing of the Fourth Amendment, and it’s unwarranted counterterrorism because it expends resources on things that won’t catch or deter terrorists. Indeed, VIPR “stings” may encourage terrorism because they show that terrorism successfully undermines the American way of life.

PATRIOT Powers: Roving Wiretaps

Last week, I wrote a piece for Reason in which I took a close look at the USA PATRIOT Act’s “lone wolf” provision—set to expire at the end of the year, though almost certain to be renewed—and argued that it should be allowed to lapse. Originally, I’d planned to survey the whole array of authorities that are either sunsetting or candidates for reform, but ultimately decided it made more sense to give a thorough treatment to one than to squeeze an inevitably shallow gloss on four or five complex areas of law into the same space. But the Internets are infinite, so I’ve decided to turn the Reason piece into Part I of a continuing series on PATRIOT powers. In this edition: Section 206, roving wiretap authority.

The idea behind a roving wiretap should be familiar if you’ve ever watched The Wire, where dealers used disposable “burner” cell phones to evade police eavesdropping. A roving wiretap is used when a target is thought to be employing such measures to frustrate investigators, and allows the eavesdropper to quickly begin listening on whatever new phone line or Internet account his quarry may be using, without having to go back to a judge for a new warrant every time. Such authority has long existed for criminal investigations—that’s “Title III” wiretaps if you want to sound clever at cocktail parties—and pretty much everyone, including the staunchest civil liberties advocates, seems to agree that it also ought to be available for terror investigations under the Foreign Intelligence Surveillance Act. So what’s the problem here?


To understand the reasons for potential concern, we need to take a little detour into the differences between electronic surveillance warrants under Title III and FISA. The Fourth Amendment imposes two big requirements on criminal warrants: “probable cause” and “particularity”. That is, you need evidence that the surveillance you’re proposing has some connection to criminal activity, and you have to “particularly [describe] the place to be searched and the persons or things to be seized.” For an ordinary non-roving wiretap, that means you show a judge the “nexus” between evidence of a crime and a particular “place” (a phone line, an e-mail address, or a physical location you want to bug). You will often have a named target, but you don’t need one: If you have good evidence gang members are meeting in some location or routinely using a specific payphone to plan their crimes, you can get a warrant to bug it without necessarily knowing the names of the individuals who are going to show up. On the other hand, though, you do always need that criminal nexus: No bugging Tony Soprano’s AA meeting unless you have some reason to think he’s discussing his mob activity there. Since places and communications facilities may be used by both criminal and innocent persons, the officer monitoring the facility is only supposed to record what’s pertinent to the investigation.

When the tap goes roving, things obviously have to work a bit differently. For roving taps, the warrant shows a nexus between the suspected crime and an identified target. Then, as surveillance gets underway, the eavesdroppers can go up on a line once they’ve got a reasonable belief that the target is “proximate” to a location or communications facility. It stretches that “particularity” requirement a bit, to be sure, but the courts have thus far apparently considered it within bounds. It may help that they’re not used with great frequency: Eleven were issued last year, all to state-level investigators, for narcotics and racketeering investigations.

Surveillance law, however, is not plug-and-play. Importing a power from the Title III context into FISA is a little like dropping an unfamiliar organism into a new environment—the consequences are unpredictable, and may well be dramatic. The biggest relevant difference is that with FISA warrants, there’s always a “target”, and the “probable cause” showing is not of criminal activity, but of a connection between that target and a “foreign power,” which includes terror groups like Al Qaeda. However, for a variety of reasons, both regular and roving FISA warrants are allowed to provide only a description of the target, rather than the target’s identity. Perhaps just as important, FISA has a broader definition of the “person” to be specified as a “target” than Title III. For the purposes of criminal wiretaps, a “person” means any “individual, partnership, association, joint stock company, trust, or corporation.” The FISA definition of “person” includes all of those, but may also be any “group, entity, …or foreign power.” Some, then, worry that roving authority could be used to secure “John Doe” warrants that don’t specify a particular location, phone line, or Internet account—yet don’t sufficiently identify a particular target either. Congress took some steps to address such concerns when it reauthorized Section 206 back in 2005, and some legislators have proposed further changes—which I’ll get to in a minute. But we actually need to understand a few more things about the peculiarities of FISA wiretaps to see why the risk of overbroad collection is especially high here.

In part because courts have suggested that the constraints of the Fourth Amendment bind more loosely in the foreign intelligence context, FISA surveillance is generally far more sweeping in its acquisition of information. In 2004, the FBI gathered some 87 years’ worth of foreign-language audio recordings alone pursuant to FISA warrants. As David Kris (now assistant attorney general for the Justice Department’s National Security Division) explains in his definitive text on the subject, a FISA warrant typically “permits acquisition of nearly all information from a monitored facility or a searched location.” (This may be somewhat more limited for roving taps; I’ll return to the point shortly.) As a rare public opinion from the FISA Court put it in 2002: “Virtually all information seized, whether by electronic surveillance or physical search, is minimized hours, days, or weeks after collection.” The way this is supposed to be squared with the Fourth Amendment rights of innocent Americans who may be swept up in such broad interception is via those “minimization” procedures, employed after the fact to filter out irrelevant information.

That puts a fairly serious burden on these minimization procedures, however, and it’s not clear that they can bear it. First, consider the standard applied. The FISA Court explains that “communications of or concerning United States persons that could not be foreign intelligence information or are not evidence of a crime… may not be logged or summarized” (emphasis added). This makes a certain amount of sense: FISA intercepts will often be in unfamiliar languages, foreign agents will often speak in coded language, and the significance of a particular statement may not be clear initially. But such a deferential standard does mean they’re retaining an awful lot of data. And indeed, it’s important to recognize that “minimization” does not mean “deletion,” as the Court’s reference to “logs” and “summaries” hints. Typically, intercepts that are “minimized” simply aren’t logged for easy retrieval in a database. In the ’80s, this may have been nearly as good for practical purposes as deletion; with the advent of powerful audio search algorithms capable of quickly scanning many hours of recording for particular words or voices, the distinction may no longer amount to much. And we know that much more material than is officially “retained” remains available to agents. In the 2003 case U.S. v. Sattar, pursuant to FISA surveillance, “approximately 5,175 pertinent voice calls … were not minimized.” But when it came time for the discovery phase of a criminal trial against the FISA targets, the FBI “retrieved and disclosed to the defendants over 85,000 audio files … obtained through FISA surveillance.”

Cognizant of these concerns, Congress tried to add some safeguards in 2005 when it reauthorized the PATRIOT Act. FISA warrants are still permitted to work on descriptions of a target, but the word “specific” was added, presumably to reinforce that the description must be precise enough to uniquely pick out a person or group. Congress also stipulated that eavesdroppers must inform the FISA Court within ten days of any new facility they eavesdrop on, and explain the “facts justifying a belief that the target is using, or is about to use, that new facility or place.”

Better, to be sure; but without access to the classified opinions of the FISA Court, it’s quite difficult to know just what this means in practice. In criminal investigations, we have a reasonable idea of what the “proximity” standard for roving taps entails. Maybe a target checks into a hotel with a phone in the room, or a dealer is observed to walk up to a pay phone, or to buy a “burner.” It is much harder to guess how the “is using or is about to use” standard will be construed in light of FISA’s vastly broader presumption of sweeping up-front acquisition. Again, we know that the courts have been satisfied to place enormous weight on after-the-fact minimization of communications, and it seems inevitable that they will do so to an even greater extent when they only learn of a new tap ten days (or 60 days with good reason) after eavesdropping has commenced.

We also don’t know how much is built into that requirement that warrants name a “specific” target, and there’s a special problem here when surveillance roves across not only facilities but types of facility. Suppose, for instance, that a FISA warrant is issued for me, but investigators have somehow been unable to learn my identity. Among the data they have obtained for their description, however, are a photograph, a voiceprint from a recording of my phone conversation with a previous target, and the fact that I work at the Cato Institute. Now, this is surely sufficient to pick me out specifically for the purposes of a warrant initially meant for telephone or oral surveillance. The voiceprint can be used to pluck all and only my conversations from the calls on Cato’s lines. But a description sufficient to specify a unique target in that context may not be sufficient in the context of, say, Internet surveillance, as certain elements of the description become irrelevant, and those that remain threaten to cover a much larger pool of people. Alternatively, if someone has a very unusual regional dialect, that may be sufficiently specific to pinpoint their voice in one location or community using a looser matching algorithm (perhaps because there is no actual recording, or it is brief or of low quality), but insufficient if they travel to another location where many more people have similar accents.
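The underlying problem is just base-rate arithmetic, and a toy sketch makes it vivid. The error rate and pool sizes below are invented for illustration—real matchers and real populations will differ—but the logic is general: a matcher precise enough to be “specific” against one office’s phone lines sweeps in hundreds of strangers when roved across a nationwide service.

```typescript
// Toy base-rate arithmetic (all numbers invented for illustration):
// even a very accurate matcher produces many false hits when it is
// applied to a large enough population.

const falseMatchRate = 1e-4; // assume: wrongly matches 1 in 10,000 strangers

function expectedFalseMatches(poolSize: number): number {
  return poolSize * falseMatchRate;
}

// Monitoring a few hundred callers on one organization's lines:
// the description is effectively unique.
console.log(expectedFalseMatches(200)); // 0.02 -- essentially nobody

// Roving across a service with millions of users: the same
// description now covers hundreds of bystanders.
console.log(expectedFalseMatches(3_000_000)); // 300
```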

Russ Feingold (D-WI) has proposed amending the roving wiretap language so as to require that a roving tap identify the target. In fact, it’s not clear that this quite does the trick either. First, just conceptually, I don’t know that a sufficiently precise description can be distinguished from an “identity.” There’s an old and convoluted debate in the philosophy of language about whether proper names refer directly to their objects or rather are “disguised definite descriptions,” such that “Julian Sanchez” means “the person who is habitually called that by his friends, works at Cato, annoys others by singing along to Smiths songs incessantly…” and so on.  Whatever the right answer to that philosophical puzzle, clearly for the practical purposes at issue here, a name is just one more kind of description. And for roving taps, there’s the same kind of scope issue: Within Washington, DC, the name “Julian Sanchez” probably either picks me out uniquely or at least narrows the target pool down to a handful of people. In Spain or Latin America—or, more relevant for our purposes, in parts of the country with very large Hispanic communities—it’s a little like being “John Smith.”

This may all sound a bit fanciful. Surely sophisticated intelligence officers are not going to confuse Cato Research Fellow Julian Sanchez with, say, Duke University Multicultural Affairs Director Julian Sanchez? And of course, that is quite unlikely—I’ve picked an absurdly simplistic example for purposes of illustration. But there is quite a lot of evidence in the public record to suggest that intelligence investigations have taken advantage of new technologies to employ “targeting procedures” that do not fit our ordinary conception of how search warrants work. I mentioned voiceprint analysis above; keyword searches of both audio and text present another possibility.

We also know that individuals can often be uniquely identified by their pattern of social or communicative connections. For instance, researchers have found that they can take a completely anonymized “graph” of the social connections on a site like Facebook—basically replacing every name with a meaningless number, but preserving the pattern of who is friends with whom—and then use that graph to relink the numbers to names using the data of a different but overlapping social network like Flickr or Twitter. We know the same can be (and is) done with calling records—since in a sense your phone bill is a picture of another kind of social network. Using such methods of pattern analysis, investigators might determine when a new “burner” phone is being used by the same person they’d previously been targeting at another number, even if most or all of his contacts have also switched phone numbers. Since, recall, the “person” who is the “target” of FISA surveillance may be a “group” or other “entity,” and since I don’t think Al Qaeda issues membership cards, the “description” of the target might consist of a pattern of connections thought to reliably distinguish those who are part of the group from those who merely have some casual link to another member.
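To see how a pattern of connections can function as an identifier, here’s a minimal sketch of re-identification by graph structure. The five-person network and all the names are invented, and real attacks—like the published research on social network de-anonymization alluded to above—are vastly more robust to noise and scale. But the principle is the same: strip the labels, and the shape of the graph still gives you away.

```typescript
// A graph as adjacency sets: node label -> set of neighbor labels.
type Graph = Map<string, Set<string>>;

// Structural "signature": a node's own degree plus the sorted
// degrees of its neighbors. Labels play no role.
function signature(g: Graph, node: string): string {
  const neighbors = g.get(node) ?? new Set<string>();
  const neighborDegrees = [...neighbors]
    .map((n) => g.get(n)?.size ?? 0)
    .sort((x, y) => x - y);
  return `${neighbors.size}:${neighborDegrees.join(",")}`;
}

// Link anonymous IDs to named accounts whose signature is unique.
function reidentify(anonymized: Graph, named: Graph): Map<string, string> {
  const bySig = new Map<string, string[]>();
  for (const node of named.keys()) {
    const sig = signature(named, node);
    bySig.set(sig, [...(bySig.get(sig) ?? []), node]);
  }
  const matches = new Map<string, string>();
  for (const anon of anonymized.keys()) {
    const candidates = bySig.get(signature(anonymized, anon)) ?? [];
    if (candidates.length === 1) matches.set(anon, candidates[0]);
  }
  return matches;
}

// Build an undirected graph from an edge list, relabeling nodes.
function build(edges: [string, string][], rename: (v: string) => string): Graph {
  const g: Graph = new Map();
  for (const [u, v] of edges) {
    const a = rename(u), b = rename(v);
    if (!g.has(a)) g.set(a, new Set());
    if (!g.has(b)) g.set(b, new Set());
    g.get(a)!.add(b);
    g.get(b)!.add(a);
  }
  return g;
}

// The same invented five-person network, once with labels stripped
// (think: anonymized calling records) and once with names (think:
// a public social site).
const edges: [string, string][] = [
  ["A", "B"], ["A", "C"], ["A", "D"], ["A", "E"], ["B", "C"], ["C", "D"],
];
const names: Record<string, string> = {
  A: "Alice", B: "Bob", C: "Carol", D: "Dan", E: "Eve",
};
const anonymized = build(edges, (v) => "#" + v);
const named = build(edges, (v) => names[v]);

console.log(reidentify(anonymized, named));
// => #A -> Alice, #C -> Carol, #E -> Eve. (#B and #D occupy
// structurally identical positions, so they stay ambiguous; real
// attacks extend matches outward from seed nodes like these.)
```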

This brings us to the final concern about roving surveillance under FISA. Criminal wiretaps are always eventually disclosed to their targets after the fact, and typically undertaken with a criminal trial in mind—a trial where defense lawyers will pore over the actions of investigators in search of any impropriety. FISA wiretaps are covert; the targets typically will never learn that they occurred. FISA judges and legislators may be informed, at least in a summary way, about what surveillance was undertaken and what targeting methods were used, but especially if those methods are of the technologically sophisticated type I alluded to above, they are likely to have little choice but to defer to investigators on questions of their accuracy and specificity. Even assuming total honesty by the investigators, judges may not think to question whether a method of pattern analysis that is precise and accurate when applied (say) within a single city or metro area will be as precise at the national level, or whether, given changing social behavior, a method that was precise last year will also be precise next year. Does it matter if an Internet service initially used by a few thousand people—including, perhaps, surveillance targets—comes to be embraced by millions? Precisely because the surveillance is so secretive, it is incredibly hard to know which concerns are urgent and which are not really a problem, let alone how to think about addressing the ones that merit some legislative response.

I nevertheless intend to give it a shot in a broader paper on modern surveillance I’m working on, but for the moment I’ll just say: “It’s tricky.”  What is absolutely essential to take away from this, though, is that these loose and lazy analogies to roving wiretaps in criminal investigations are utterly unhelpful in thinking about the specific problems of roving FISA surveillance. That investigators have long been using “these” powers under Title III is no answer at all to the questions that arise here. Legislators who invoke that fact as though it should soothe every civil libertarian brow are simply evading their responsibilities.

What You Don’t Know Won’t Hurt You (Surveillance State Edition)

While there are many choice tidbits to relate from Tuesday’s hearings on PATRIOT Act reform at the House Judiciary Committee’s Subcommittee on the Constitution—not least the fellow who had to be wrestled from the room, literally kicking and screaming, after he tried to stand and interrupt with a complaint about alleged FBI violations of his civil rights—I’ll just relate a novel theory of the Fourth Amendment advanced by Rep. Steve King (R-Iowa).

The ACLU’s Mike German, a former FBI agent turned surveillance policy expert, was explaining that it’s hard to know whether expansive surveillance powers are being abused, because they’re mostly used in secret and deployed via third parties like financial institutions and telecoms, which have little incentive to raise much fuss or draw attention to their cooperation. King interrupted to suggest that if we weren’t hearing about constitutional challenges, then it was probably safe to assume there was no Fourth Amendment harm. German tried to reiterate that the people whose privacy interests were directly harmed typically would not know they had ever been targeted.

That, King declared, was precisely the point. Surveillance of which the subject never became aware, he said, could be compared to a “tree falling in the forest” when nobody’s around. In other words, if you aren’t ultimately prosecuted, and don’t even feel subjective distress as a result of the knowledge that your private records or communications have been pored over, then it’s presumably no harm, no foul. If we take this line of thinking literally, sufficiently secret surveillance can never be unconstitutional, which would seem to make King a spiritual cousin of Richard “if the president does it, that means it’s not illegal” Nixon.

Picture Don Draper Stamping on a Human Face, Forever

Last week, a coalition of 10 privacy and consumer groups sent letters to Congress advocating legislation to regulate behavioral tracking and advertising, a phrase that actually describes a broad range of practices used by online marketers to monitor and profile Web users for the purpose of delivering targeted ads. While several friends at the Tech Liberation Front have already weighed in on the proposal in broad terms – in a nutshell: they don’t like it – I think it’s worth taking a look at some of the specific concerns raised and remedies proposed. Some of the former strike me as being more serious than the TLF folks allow, but many of the latter seem conspicuously ill-tailored to their ends.

First, while it’s certainly true that there are privacy advocates who seem incapable of grasping that not all rational people place an equally high premium on anonymity, it strikes me as unduly dismissive to suggest, as Berin Szoka does, that it’s inherently elitist or condescending to question whether most users are making informed choices about their privacy. If you’re a reasonably tech-savvy reader, you probably know something about conventional browser cookies, how they can be used by advertisers to create a trail of your travels across the Internet, and how you can limit this.  But how much do you know about Flash cookies? Did you know about the old CSS hack I can use to infer the contents of your browser history even without tracking cookies? And that’s without getting really tricksy. If you knew all those things, congratulations, you’re an enormous geek too – but normal people don’t.  And indeed, polls suggest that people generally hold a variety of false beliefs about common online commercial privacy practices.  Proof, you might say, that people just don’t care that much about privacy or they’d be attending more scrupulously to Web privacy policies – except this turns out to impose a significant economic cost in itself.
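For the curious, here’s roughly how that CSS history hack worked—a sketch of the now-mitigated technique, with made-up probe URLs. Browsers have since been patched to lie to scripts about the styling of visited links, precisely because tricks like this were so cheap to pull off:

```typescript
// Sketch of the old CSS :visited history-sniffing trick (now blocked).
// Probe URLs are hypothetical; this runs in a browser page, not Node.

const probeUrls = [
  "https://example.com/",
  "https://example.org/some-page",
];

function sniffHistory(urls: string[]): string[] {
  // Give visited probe links a distinctive computed color.
  const style = document.createElement("style");
  style.textContent = "a.probe:visited { color: rgb(255, 0, 0); }";
  document.head.appendChild(style);

  const visited: string[] = [];
  for (const url of urls) {
    const link = document.createElement("a");
    link.href = url;
    link.className = "probe";
    document.body.appendChild(link);
    // Before the fix, getComputedStyle reflected the :visited rule,
    // revealing whether the URL was in the user's history.
    if (getComputedStyle(link).color === "rgb(255, 0, 0)") {
      visited.push(url);
    }
    link.remove();
  }
  style.remove();
  return visited;
}

console.log(sniffHistory(probeUrls));
```

Flash cookies raised a parallel issue: they were stored and managed by the Flash plugin outside the browser’s ordinary cookie controls, so at the time, clearing your browser cookies didn’t touch them.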

The truth is, if we were dealing with a frictionless Coasean market of fully informed users, regulation would not be necessary, but it would not be especially harmful either, because users who currently allow themselves to be tracked would all gladly opt in. In the real world, though, behavioral economics suggests that defaults matter quite a lot: Making informed privacy choices can be costly, and while an opt-out regime will probably yield tracking of some who, under conditions of full information and frictionless choice, would prefer not to be tracked, an opt-in regime will likely prevent tracking of folks who don’t object to tracking. And preventing that tracking also has real social costs, as Berin and Adam Thierer have taken pains to point out. In particular, it merits emphasis that behavioral advertising is regarded by many as providing a viable business model for online journalism, where contextual advertising tends not to work very well: There aren’t a lot of obvious products to tie in to an important investigative story about municipal corruption. Either way, though, the outcome is shaped by the default rule about the level of monitoring users are presumed to consent to. So which set of defaults ought we to prefer?

Here’s why I still come down mostly on Adam and Berin’s side, and against many of the regulatory remedies proposed. At the risk of stating the obvious, users start with de facto control of their data. Slightly less obvious: While users will tend to have heterogeneous privacy preferences – that’s why setting defaults either way is tricky – individual users will often have fairly homogeneous preferences across many different sites. Now, it seems to be an implicit premise of the argument for regulation that the friction involved in making lots of individual site-by-site choices about privacy will yield oversharing. But the same logic cuts in both directions: Transactional friction can block efficient departures from a high-privacy default as well. Even a default that optimally reflects the median user’s preferences or reasonable expectations is going to flub it for the outliers. If the variance in preferences is substantial, and if different defaults entail different levels of transactional friction, nailing the default is going to be less important than choosing the rule that keeps friction lowest. Given that most people do most of their Web surfing on a relatively small number of machines, this makes the browser a much more attractive locus of control. In terms of a practical effect on privacy, the coalition members would probably achieve more by persuading Mozilla to ship Firefox with third-party cookies rejected out of the box than they would from any legislation they’re likely to get – and indeed, it would probably have a more devastating effect on the behavioral ad market. Less bluntly, browsers could include a startup option that asks users whether they want to import an exclusion list maintained by their favorite force for good.
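(The browser-side switch, for what it’s worth, already exists: third-party cookie handling in Firefox is a single preference, shown below as it would appear in a profile’s user.js file. The pref name and values here are real, though their exact semantics have shifted across Firefox versions.)

```js
// Firefox user.js preference: cookie-acceptance policy.
// 0 = accept all cookies; 1 = reject third-party cookies.
user_pref("network.cookie.cookieBehavior", 1);
```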

On the model proposed by the coalition, individuals have to make affirmative decisions about what data collection to permit for each Web site or ad network at least once every three months, and maybe each time they clear their cookies. If you think almost everyone would, if fully informed, opt out of such collection, this might make sense. But if you take the social benefits of behavioral targeting seriously, this scheme seems likely to block a lot of efficient sharing. Browser-based controls can still be a bit much for the novice user to grapple with, but programmers seem to be getting better and better at making it easier and more automatic for users to set privacy-protective defaults. If the problem with the unregulated market is supposed to be excessive transaction costs, it seems strange to lock in a model that keeps those costs high even as browser developers are finding ways to streamline that process. It’s also worth considering whether such rules wouldn’t have the perverse consequence of encouraging consolidation across behavioral trackers. The higher the bar is set for consent to monitoring, the more that consent effectively becomes a network good, which may encourage concentration of data in a small number of large trackers – not, presumably, the result privacy advocates are looking for. Finally – and for me this may be the dispositive point – it’s worth remembering that while American law is constrained by national borders, the Internet is not. And it seems to me that there’s a very real danger of giving the least savvy users a false sense of security – the government is on the job guarding my privacy! no need to bother learning about cookies! – when they may routinely and unwittingly be interacting with sites beyond the reach of domestic regulations.

There are similar practical difficulties with the proposal that users be granted a right of access to behavioral tracking data about them.  Here’s the dilemma: Any requirement that trackers make such data available to users is a potential security breach, which increases the chances of sensitive data falling into the wrong hands. I may trust a site or ad network to store this information for the purpose of serving me ads and providing me with free services, but I certainly don’t want anyone who sends them an e-mail with my IP address to have access to it. The obvious solution is for them to have procedures for verifying the identity of each tracked user – but this would appear to require that they store still more information about me in order to render tracking data personally identifiable and verifiable. A few ways of managing the difficulty spring to mind, but most defer rather than resolve the problem, and add further points of potential breach.

That doesn’t mean there’s no place for government or policy change here, but it’s not always the one the coalition endorses. Let’s look more closely at some of their specific concerns and see which, if any, are well-suited to policy remedies. Only one really has anything to do with behavioral advertising, and it’s easily the weakest of the bunch. The groups worry that targeted ads – for payday loans, sub-prime mortgages, or snake-oil remedies – could be used to “take advantage of vulnerable consumers.” It’s not clear that this is really a special problem with behavioral ads, however: Similar targeting could surely be accomplished by means of contextual ads, which are delivered via relevant sites, pages, or search terms rather than depending on the personal characteristics or browsing history of the viewer – yet the groups explicitly aver that no new regulation is appropriate for contextual advertising. In any event, since whatever problem exists here is a problem with ads, the appropriate remedy is to focus on deceptive or fraudulent ads, not the particular means of delivery. We already, quite properly, have rules covering dishonest advertising practices.

The same sort of reply works for some of the other concerns, which are all linked in some more specific way to the collection, dissemination, and non-advertising use of information about people and their Web browsing habits. The groups worry, for instance, about “redlining” – the restriction or denial of access to goods, services, loans, or jobs on the basis of traits linked to race, gender, sexual orientation, or some other suspect classification. But as Steve Jobs might say, we’ve got an app for that: It’s already illegal to turn down a loan application on the grounds that the applicant is African American. There’s no special exemption for the case where the applicant’s race was inferred from a Doubleclick profile. But this actually appears to be something of a redlining herring, so to speak: When you get down into the weeds, the actual proposal is to bar any use of data collected for “any credit, employment, insurance, or governmental purpose or for redlining.” This seems excessively broad; it should suffice to say that a targeter “cannot use or disclose information about an individual in a manner that is inconsistent with its published notice.”

Particular methods of tracking may also be covered by current law, and I find it unfortunate that the coalition letter lumps together so many different practices under the catch-all heading of “behavioral tracking.” Most behavioral tracking is either done directly by sites users interact with – as when Amazon uses records of my past purchases to recommend new products I might like – or by third party companies whose ads place browser cookies on user computers. Recently, though, some Internet Service Providers have drawn fire for proposals to use Deep Packet Inspection to provide information about their users’ behavior to advertising partners – proposals thus far scuppered by a combination of user backlash and congressional grumbling. There is at least a colorable argument to be made that this practice would already run afoul of the Electronic Communications Privacy Act, which places strict limits on the circumstances under which telecom providers may intercept or share information about the contents of user communications without explicit permission. ECPA is already seriously overdue for an update, and some clarification on this point would be welcome. If users do wish to consent to such monitoring, that should be their right, but it should not be by means of a blanket authorization in eight-point type on page 27 of a terms-of-service agreement.

Similarly welcome would be some clarification on the status of such behavioral profiles when the government comes calling. It’s an unfortunate legacy of some technologically atavistic Supreme Court rulings that we enjoy very little Fourth Amendment protection against government seizure of private records held by third parties – the dubious rationale being that we lose our “reasonable expectation of privacy” in information we’ve already disclosed to others outside a circle of intimates. While ECPA seeks to restore some protection of that data by statute, we’ve made it increasingly easy in recent years for the government to seek “business records” by administrative subpoena rather than court order. It should not be possible to circumvent ECPA’s protections by acquiring, for instance, records of keyword-sensitive ads served on a user’s Web-based e-mail.

All that said, some of the proposals offered up seem, while perhaps not urgent, less problematic. Requiring some prominent link to a plain-English description of how information is collected and used constitutes a minimal burden on trackers – responsible sites already maintain prominent links to privacy policies anyway – and serves the goal of empowering users to make more informed decisions. I’m also warily sympathetic to the idea of giving privacy policies more enforcement teeth – the wariness stemming from a fear of incentivizing frivolous litigation. Still, the status quo is that sites and ad networks profitably elicit information from users on the basis of stated privacy practices, but often aren’t directly liable to consumers if they flout those promises, unless the consumer can show that the breach of trust resulted in some kind of monetary loss.

Finally, a quick note about one element of the coalition recommendations that neither they nor their opponents seem to have discussed much – the insistence that there be no federal preemption of state privacy law. I assume what’s going on here is that the privacy advocates expect some states to be more protective of privacy than Congress or the FTC would be, and want to encourage that, while libertarians are more concerned with keeping the federal government from getting involved at all. But really, if there’s an issue that was made for federal preemption, this is it. A country where vendors, advertisers, and consumers on a borderless Internet have to navigate 50 flavors of privacy rules to sell a banner ad or an iTunes track does not sound particularly conducive to privacy, commerce, or informed consumer choice.

Victory for Decency at the Supreme Court

The Supreme Court’s decision today in Safford Unified School District #1 et al. v. Redding was a victory for privacy and decency. The Court held that a middle school violated the Fourth Amendment rights of a thirteen-year-old girl by strip-searching her in a failed effort to find prescription-strength ibuprofen pills and an over-the-counter painkiller.

The Cato Institute filed an amicus brief, joined by the Rutherford Institute and the Goldwater Institute, opposing such abuses of school officials’ authority. The search in this case should have ended with the student’s backpack and pockets; forcing a teenage girl to pull her bra and panties away from her body for visual inspection is an invasion of privacy that must be reserved for extreme cases. School officials should be authorized to conduct such a search only when they have credible evidence that the student is in possession of objects posing a danger to the school and that the student has hidden them in a place that only a strip search will uncover.

Today’s decision should not come as a surprise. School officials were not granted unlimited police power in the seminal student search case, New Jersey v. T.L.O. Justice Stevens explored the limits of school searches in his partial concurrence and partial dissent, specifically mentioning strip searches. “To the extent that deeply intrusive searches are ever reasonable outside the custodial context, it surely must only be to prevent imminent, and serious harm.”

The Fourth Amendment exists to preserve a balance between the individual’s reasonable expectation of privacy and the state’s need for order and security. Unnecessarily traumatizing students with invasive and humiliating breaches of personal privacy upsets this balance. Today’s decision restores reasonable limits to student searches and provides valuable guidance to school officials.

Schneier and Friends on Fixing Airport Security

Security guru Bruce Schneier comes down on the strictly pragmatic side in this essay called “Fixing Airport Security.” Because of terrorism fears, he says, TSA checkpoints are “here to stay.” The rules should be made more transparent. He also argues for an amendment to some constitutional doctrines:

The Constitution provides us, both Americans and visitors to America, with strong protections against invasive police searches. Two exceptions come into play at airport security checkpoints. The first is “implied consent,” which means that you cannot refuse to be searched; your consent is implied when you purchased your ticket. And the second is “plain view,” which means that if the TSA officer happens to see something unrelated to airport security while screening you, he is allowed to act on that. Both of these principles are well established and make sense, but it’s their combination that turns airport security checkpoints into police-state-like checkpoints.

The comments turn up an important recent Fourth Amendment decision circumscribing TSA searches. In a case called United States v. Fofana, the U.S. District Court for the Southern District of Ohio held that a search of passenger bags going beyond what was necessary to detect articles dangerous to air transportation violated the Fourth Amendment. “[T]he need for heightened security does not render every conceivable checkpoint search procedure constitutionally reasonable,” wrote the court.

Application of this rule throughout the country would not end the “police-state-like checkpoint,” but at least the rummaging through our things for purposes unrelated to air travel security would be restrained.

I prefer principle over pragmatism and would get rid of TSA.