
Revise the Maryland Wiretap Law?

As I said in this piece in the Baltimore Sun, Maryland police officers are misusing that state’s wiretap law to deter anyone who would film them performing their duties. Maryland officers have asserted that any audio recording of a conversation, even in a public place, is a violation of the state’s wiretapping law and a felony punishable by five years in prison and a $10,000 fine. Officers made this claim to deter filming of an arrest at the Preakness, and when motorcyclist Anthony Graber videotaped his traffic stop.

As Radley Balko points out, the officers’ reading of the law is out of step with the language of the statute itself and Maryland rulings interpreting the scope of the law. Is it time for a revision of this law, or is it just the officers’ interpretation that is the problem? I discussed this on the Kojo Nnamdi Show with the prosecutor pressing charges against Anthony Graber, State’s Attorney Joseph Cassilly, and Graber’s lawyer, David Rocah of the Maryland ACLU.

If you ask some officers in Maryland, any recording of a conversation violates the wiretap statute. If you ask a judge, you will get an entirely different reading of the law. Even though Maryland’s wiretapping statute is considered a “unanimous consent” or “two-party consent” law, its language differs from that of other states placed in the same category, such as Massachusetts and Illinois. Where those states’ laws contain no privacy limitation for recordings of conversations occurring outside electronic means of communication, the first section of the Maryland wiretapping law restricts unlawful interceptions of “oral communications” to words spoken in a “private conversation.”

While the analysis for wire communications is made without regard to privacy, Maryland courts held in Fearnow v. C & P Telephone Co. that a “private conversation” is one where there is a “reasonable expectation of privacy.” Fourth Amendment jurisprudence provides plenty of guidance on where a “reasonable expectation of privacy” exists. Simply put, a traffic stop on an interstate is not a place where Anthony Graber or the officers who cited him have a reasonable expectation of privacy.

This conclusion is bolstered by the guidance given to the Montgomery County Police by the Maryland Attorney General in this 2000 advisory opinion on recording traffic stops. Since 1991, the wiretapping statute has included an exemption for police dash cameras, allowing officers to record interactions with motorists after warning the citizen that the traffic stop would be recorded. The 2000 letter addresses the possibility that other people could show up after the receipt of consent from a motorist, as well as potential “inadvertent interceptions.” The opinion concludes that there is little for officers to worry about, but the state legislature expanded the law enforcement exception in 2002 to address this concern anyway. In a footnote, the advisory opinion makes the point that, in any case, the motorists being pulled over have no reasonable expectation of privacy:

It is also notable that many encounters between uniformed police officers and citizens could hardly be characterized as “private conversations.” For example, any driver pulled over by a uniformed officer in a traffic stop is acutely aware that his or her statements are being made to a police officer and, indeed, that they may be repeated as evidence in a courtroom. It is difficult to characterize such a conversation as “private.”

The Attorney General’s office provided further guidance on the issue in this letter to a state legislator in 2009, advising that surreptitious recording of a meeting of the Democratic Club would probably not be a violation of the Maryland wiretapping law because statements made in this setting lack a “reasonable expectation of privacy.”

So, under the interpretation of the law supporting Anthony Graber’s prosecution, dash camera footage of Anthony Graber’s traffic stop is not a violation of the law, but Graber’s helmet-mounted footage is. The law enforcement officer, a public official performing public duties, retains a “reasonable expectation of privacy” on the side of I-95, but Anthony Graber has none. This is an assertion made contrary to the interpretation of the courts of Maryland, the Maryland Attorney General, and common sense.

This injustice could be resolved in several ways. First, as Radley suggests, the Maryland Attorney General could issue an opinion clarifying the wiretapping law with regards to recording police activity. Advisory opinions are not generally given sua sponte, so a state legislator or other official would have to request the AG’s interpretation. Second, Anthony Graber’s case may provide a rebuttal to an expansive reading of the statute by Maryland law enforcement officers. Third, the legislature could step in to deter future abuse of the statute by expressly stating that public discussions are not “private conversations.”

On the Kojo Nnamdi Show, Rocah argued for preserving the “two-party consent” statute. The legislature, in fact, can clarify the definition of “private conversations” without changing the consent requirement of the law with regard to electronic communications.

On the other hand, State’s Attorney Joseph Cassilly recalled occasions when citizens have come to his office with recordings of threats or extortion demands and he was required to tell them that under Maryland law (1) their recording was not admissible as evidence because it did not have the consent of the threatening or extorting party (though I see no reason that a letter with the same communication would be inadmissible); and (2) the victim of the threat or extortion committed a felony violation of the wiretapping law by making the recording in the first place. That may be the law, but it’s not justice.

In any case, the prosecution of Anthony Graber is an abuse of police power. If Maryland law enforcement officers continue to use the state’s wiretapping law to shield their activities from public view, the backlash may result in a revision of the law in its entirety.

A Response to Intel Abuses at Last?

As I explain in yesterday’s BloggingHeads dialogue with Eli Lake, I’m chary of relying too much on legislative “sunset” provisions to check abuse of power, especially in the shadowy world of intelligence. (For the fleshed-out version of the argument, see Chris Mooney’s 2004 piece in Legal Affairs.) After all, in January, the Office of the Inspector General had released an absolutely damning report showing that for years, FBI agents systematically manipulated their incredibly broad National Security Letter authorities to get information about Americans’ telephone usage without following any legitimate legal process at all. To cover up those abuses, officials compounded their crimes by lying to federal courts and refusing to use an auditable computer system for their information requests. The report was released amid debate over what reforms should be included in the reauthorization of several controversial Patriot Act provisions, with proposed changes to the NSL statutes front and center—not least because several courts had found constitutional problems with the gag orders accompanying NSLs. Yet just a month later, Congress consented to an extension of those Patriot provisions without implementing any of the various rather mild changes that had won approval in the House or Senate Judiciary Committees. If a sunset-inspired review didn’t yield any real consequences then, I thought, what would it take?

Today, however, I see glimmers of interest in something more closely resembling serious oversight. In a letter to Attorney General Eric Holder, sent last month but released yesterday, Senate Judiciary Committee Chair Patrick Leahy (D-VT) urges DOJ to implement many of the reforms in the SJC’s bill voluntarily—above all, procedures to guarantee a detailed record of the grounds on which various types of information are sought, and to govern the retention, use, and distribution of information obtained. Leahy also signals his intent to ask department watchdogs to conduct audits of the use of Patriot authorities, as the Senate’s bill had stipulated. These are all, needless to say, good ideas—provided we don’t accept voluntary and mutable internal guidelines as a substitute for statutory limits with teeth.

Meanwhile, Rep. Jerry Nadler (D-NY) is holding Wednesday morning hearings on the abuses detailed in the Inspector General’s report. FBI General Counsel Valerie Caproni and IG Glenn Fine are slated to testify. (There are links to their prepared testimony already, though the documents themselves aren’t there yet as I write.) Extrapolating from past performances, I predict Caproni will allow that the abuses described were Very Serious Indeed (though, really, perhaps not quite as serious as all that…) but all cleaned up now. Nobody should be satisfied with this, and if Fine doesn’t broach the subject himself, somebody really ought to ask Caproni about some minimization procedures for the 25,000–50,000 National Security Letters the department issues annually. As Fine noted in recent testimony, the Bureau has been promising this for years now:

In August 2007, the NSL Working Group sent the Attorney General its report and proposed minimization procedures. However, we had several concerns with the findings and recommendations of the Working Group’s report, which we discussed in our March 2008 NSL report. In particular, we disagreed with the Working Group about the sufficiency of existing privacy safeguards and measures for minimizing the retention of NSL-derived information. We disagreed because the controls the Working Group cited as providing safeguards predated our NSL reviews, yet we found serious abuses of the NSL authorities.

As a result, the Acting Privacy Officer decided to reconsider the recommendations and withdrew them. The Working Group has subsequently developed new recommendations for NSL minimization procedures, which are still being considered within the Department and have not yet been issued. We believe that the Department should promptly consider the Working Group’s proposal and issue final minimization procedures for NSLs that address the collection of information through NSLs, how the FBI can upload NSL information in FBI databases, the dissemination of NSL information, the appropriate tagging and tracking of NSL derived information in FBI databases and files, and the time period for retention of NSL obtained information. At this point, more than 2 years have elapsed since our first report was issued, and final guidance is needed and overdue.

Way, way overdue—much like some kind of serious congressional response to the Bureau’s NSL Calvinball.

State Secrets, Courts, and NSA’s Illegal Wiretapping

As Tim Lynch notes, Judge Vaughn Walker has ruled in favor of the now-defunct Al-Haramain Islamic Foundation—unique among the many litigants who have tried to challenge the Bush-era program of warrantless wiretapping by the National Security Agency because they actually had evidence, in the form of a document accidentally delivered to foundation lawyers by the government itself, that their personnel had been targeted for eavesdropping.

Other efforts to get a court to review the program’s legality had been caught in a kind of catch-22: Plaintiffs who merely feared that their calls might be subject to NSA filtering and interception lacked standing to sue, because they couldn’t show a specific, concrete injury resulting from the program.

But, of course, information about exactly who has been wiretapped is a closely guarded state secret. So closely guarded, in fact, that the Justice Department was able to force the return of the document that exposed the wiretapping of Al-Haramain, and then get it barred from the court’s consideration as a “secret” even after it had been disclosed. (Contrast, incidentally, the Supreme Court’s jurisprudence on individual privacy rights, which often denies any legitimate expectation of privacy in information once revealed to a third party.) Al-Haramain finally prevailed because they were ultimately able to assemble evidence from the public record showing they’d been wiretapped, and the government declined to produce anything resembling a warrant for that surveillance.

If you read over the actual opinion, however, it may seem a little anticlimactic—as though something is missing. The ruling concludes that there’s prima facie evidence that Al-Haramain and their lawyers were wiretapped, that the government has failed to produce a warrant, and that this violates the Foreign Intelligence Surveillance Act. But of course, there was never any question about that. Not even the most strident apologists for the NSA program denied that it contravened FISA; rather, they offered a series of rationalizations for why the president was entitled to disregard a federal statute.

There was the John Yoo argument that the president essentially becomes omnipotent during wartime, and that if we can shoot Taliban on a foreign battlefield, surely we can wiretap Americans at home if they seem vaguely Taliban-ish. Even under Bush, the Office of Legal Counsel soon backed away from such… creative… lines of argument. Instead, they relied on the post-9/11 Authorization for the Use of Military Force (AUMF) against al-Qaeda, claiming it had implicitly created a loophole in the FISA law. It was David Kris, now head of DOJ’s National Security Division, who most decisively blew that one out of the water, concluding that it was “essentially impossible” to sustain the government’s reading of the AUMF.

Yet you’ll note that none of these issues arise in Walker’s opinion, because the DOJ, in effect, refused to play. They resisted the court at every step, insisting that a program discussed at length on the front pages of newspapers for years now was so very secret that no aspect of it could be discussed even in a closed setting. They continued to insist on this in the face of repeated court rulings to the contrary. So while Al-Haramain has prevailed, there’s no ruling on the validity of any of those arguments. That’s why I think Marcy Wheeler is probably correct when she predicts that the government will simply take its lumps and pay damages rather than risk an appeal. For one, while the Obama administration has been happy to invoke state secrecy as vigorously as its predecessor, it would obviously be somewhat embarrassing for Obama’s DOJ to parrot Bush’s substantive claims of near-limitless executive power. Perhaps more to the point, though, some of those legal arguments may still be operative in secret OLC memos. The FISA Amendments Act aimed to put the unlawful Bush program under court supervision, and even reasserted FISA’s language establishing it as the “exclusive means” for electronic surveillance, which would seem to drive a final stake in the heart of any argument based on the AUMF. But we ultimately don’t know what legal rationales they still consider operative, and it would surely be awkward to have an appellate court knock the legs out from under some of these secret memoranda.

None of this is to deny that the ruling is a big deal—if nothing else because it suggests that the government does not enjoy total carte blanche to shield lawbreaking from review with broad, bald assertions of privilege. But I also know that civil libertarians had hoped the courts might be the only path to a fuller accounting of—and accountability for—the domestic spying program. If the upshot of this is simply that the government must pay a few tens, or even hundreds of thousands of dollars in damages, it’s hard not to see the victory as something of a disappointment.

Three Keys to Surveillance Success: Location, Location, Location

The invaluable Chris Soghoian has posted some illuminating—and sobering—information on the scope of surveillance being carried out with the assistance of telecommunications providers.  The entire panel discussion from this year’s ISS World surveillance conference is well worth listening to in full, but surely the most striking item is a direct quotation from Sprint’s head of electronic surveillance:

[M]y major concern is the volume of requests. We have a lot of things that are automated but that’s just scratching the surface. One of the things, like with our GPS tool. We turned it on the web interface for law enforcement about one year ago last month, and we just passed 8 million requests. So there is no way on earth my team could have handled 8 million requests from law enforcement, just for GPS alone. So the tool has just really caught on fire with law enforcement. They also love that it is extremely inexpensive to operate and easy, so, just the sheer volume of requests they anticipate us automating other features, and I just don’t know how we’ll handle the millions and millions of requests that are going to come in.

To be clear, that doesn’t mean they are giving law enforcement geolocation data on 8 million people. He’s talking about the wonderful automated backend Sprint runs for law enforcement, LSite, which allows investigators to rapidly retrieve information directly, without the burden of having to get a human being to respond to every specific request for data.  Rather, says Sprint, each of those 8 million requests represents a time when an FBI computer or agent pulled up a target’s location data using their portal or API. (I don’t think you can Tweet subpoenas yet.)  For an investigation whose targets are under ongoing realtime surveillance over a period of weeks or months, that could very well add up to hundreds or thousands of requests for a few individuals. So those 8 million data requests, according to a Sprint representative in the comments, actually “only” represent “several thousand” discrete cases.

As Kevin Bankston argues, that’s not entirely comforting. The Justice Department, Soghoian points out, is badly delinquent in reporting on its use of pen/trap orders, which are generally used to track communications routing information like phone numbers and IP addresses, but are likely to be increasingly used for location tracking. And recent changes in the law may have made it easier for intelligence agencies to turn cell phones into tracking devices. In the criminal context, the legal process for getting geolocation information depends on a variety of things—different districts have come up with different standards, and it matters whether investigators want historical records about a subject or ongoing access to location info in real time. Some courts have ruled that a full-blown warrant is required in some circumstances; in other cases, a “hybrid” order consisting of a pen/trap order and a 2703(d) order suffices. But a passage from an Inspector General’s report suggests that the 2005 PATRIOT reauthorization may have made it easier to obtain location data:

After passage of the Reauthorization Act on March 9, 2006, combination orders became unnecessary for subscriber information and [REDACTED PHRASE]. Section 128 of the Reauthorization Act amended the FISA statute to authorize subscriber information to be provided in response to a pen register/trap and trace order. Therefore, combination orders for subscriber information were no longer necessary. In addition, OIPR determined that substantive amendments to the statute undermined the legal basis for which OIPR had received authorization [REDACTED PHRASE] from the FISA Court. Therefore, OIPR decided not to request [REDACTED PHRASE] pursuant to Section 215 until it re-briefed the issue for the FISA Court. As a result, in 2006 combination orders were submitted to the FISA Court only from January 1, 2006, through March 8, 2006.

The new statutory language permits FISA pen/traps to get more information than is allowed under a traditional criminal pen/trap, with a lower standard of review, including “any temporarily assigned network address or associated routing or transmission information.” Bear in mind that it would have made sense to rely on a 215 order only if the information sought was more extensive than what could be obtained using a National Security Letter, which requires no judicial approval. That makes it quite likely that it’s become legally easier to transform a cell phone into a tracking device even as providers are making it point-and-click simple to log into their servers and submit automated location queries. So it’s become much more urgent that the Justice Department start living up to its obligation to tell us how often they’re using these souped-up pen/traps, and how many people are affected. In congressional debates, pen/trap orders are invariably mischaracterized as minimally intrusive, providing little more than the list of times and phone numbers they produced 30 years ago. If they’re turning into a plug-and-play solution for lojacking the population, Americans ought to know about it.

If you’re interested enough in this stuff to have made it through that discussion, incidentally, come check out our debate at Cato this afternoon, either in the flesh or via webcast. There will be a simultaneous “tweetchat” hosted by the folks at Get FISA Right.

The FISA Amendments: Behind the Scenes

I’ve been poring over the trove of documents the Electronic Frontier Foundation has obtained detailing the long process by which the FISA Amendments Act—which substantially expanded executive power to conduct sweeping surveillance with little oversight—was hammered out between Hill staffers and lawyers at the Department of Justice and intelligence agencies. The really interesting stuff, of course, is mostly redacted, and I’m only partway through the stacks, but there are a few interesting tidbits so far.

As Wired has already reported, one e-mail shows Bush officials feared that if the attorney general was given too much discretion over retroactive immunity for telecoms that aided in warrantless wiretapping, the next administration might refuse to provide it.

A couple of other things stuck out for me. First, while it’s possible they’ve been released before and simply not crossed my desk, there are a series of position papers — so rife with underlining that they look like some breathless magazine subscription pitch — circulated to Congress explaining the Bush administration’s opposition to various proposed amendments to the FAA. Among these was a proposal by Sen. Russ Feingold (D-WI) that would have barred “bulk collection” of international traffic and required that the broad new intelligence authorizations specify (though not necessarily by name) individual targets. The idea here was that if there were particular suspected terrorists (for instance) being monitored overseas, it would be fine to keep monitoring their communications if they began talking with Americans without pausing to get a full-blown warrant — but you didn’t want to give NSA carte blanche to just indiscriminately sweep in traffic between the U.S. and anyone abroad. The position paper included in these documents is more explicit than the others I’ve seen about the motive for objecting to the bulk collection amendment. Which was, predictably, that they wanted to do bulk collection:

  • It also would prevent the intelligence community from conducting the types of intelligence collection necessary to track terrorists and develop new targets.
  • For example, this amendment could prevent the intelligence community from targeting a particular group of buildings or a geographic area abroad to collect foreign intelligence prior to operations by our armed forces.

So to be clear: Contra the rhetoric we heard at the time, the concern was not simply that NSA would be able to keep monitoring a suspected terrorist when he began calling up Americans. It was to permit the “targeting” of entire regions, scooping up all communications between the United States and the chosen area.

One other exchange at least raises an eyebrow.  If you were following the battle in Congress at the time, you may recall that there was a period when the stopgap Protect America Act had expired — though surveillance authorized pursuant to the law could continue for many months — and before Congress approved the FAA. A week into that period, on February 22, 2008, the attorney general and director of national intelligence sent a letter warning Congress that they were now losing intelligence because providers were refusing to comply with new requests under existing PAA authorizations. A day later, they had to roll that back, and some of the correspondence from the EFF FOIA record makes clear that there was an issue with a single recalcitrant provider who decided to go along shortly after the letter was sent.

But there’s another wrinkle. A week prior to this, just before the PAA was set to expire, Jeremy Bash, the chief counsel for the House Permanent Select Committee on Intelligence, sent an email to “Ken and Ben” about a recent press conference call. It’s clear from context that he’s writing to Assistant Attorney General Kenneth Wainstein and General Counsel for the Director of National Intelligence Ben Powell about this press call, where both men fairly clearly suggest that telecoms are balking for fear that they’ll no longer be immune from liability for participation in PAA surveillance after the statute lapses. Bash wants to confirm whether they really said that “private sector entities have refused to comply with PAA certifications because they were concerned that the law was temporary.” In particular, he wants to know whether this is actually true, because “the briefs I read provided a very different rationale.” In other words, Bash — who we know was cleared for the most sensitive information about NSA surveillance — was aware of some service providers being reluctant to comply with “new taskings” under the law, but not because of the looming expiration of the statute. One of his correspondents — whether Wainstein or Powell is unclear — shoots back denying having said any such thing (read the transcript yourself) and concluding with a terse:

Not addressing what is in fact the situation on both those issues (compliance and threat to halt) on this email.

In other words, the actual compliance issues they were encountering would have to be discussed over a more secure channel. If the issue wasn’t the expiration, though, what would the issue have been? The obvious alternative possibility is that NSA (or another agency) was attempting to get them to carry out surveillance that they thought might fall outside the scope of either the PAA or a particular authorization. Given how sweeping these were, that should certainly give us pause. It should also raise some questions as to whether, even before that one holdout fell into compliance, the warning letter from the AG and the DNI was misleading. Was there really ever a “gap” resulting from the statute’s sunset, or was it a matter of telecoms balking at an attempt by the intelligence community to stretch the bounds of their legal authority? The latter would certainly fit a pattern we saw again and again under the Bush administration: break the law, inducing a legal crisis, then threaten bloody mayhem if the unlawful program is forced to abruptly halt — at which point a nervous Congress grants its blessing.

Some Thoughts on the New Surveillance

Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.

I’d sketched out a rather longer version of my remarks in advance just to make sure I had my main ideas clear, and so I’ll post them here, as a sort of preview of a rather longer and more formal paper on 21st century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:

Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead.  Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.

So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried to a British debutante, bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, that he forbade his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.

Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.

The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came on to your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.

So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.

So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.

First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.

The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like.  You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.

But there’s an even less obvious potential harm here. If police didn’t need a warrant, everyone who made a phone call would know that the government could be listening in whenever it felt like it. Wiretapping is expensive and labor-intensive enough that, realistically, investigators can only be gathering information about a few people at a time. But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance. To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.

Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations.  So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.

Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say–or even listen to–controversial speech. Looking at the incredible scope of documented intelligence abuses from the 60s and 70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the 20s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card information—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.

The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice or a sufficiently unusual regional dialect in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns. If you’re on Facebook, and you and a bunch of your friends all decide to use fake names when you sign up for Twitter, I can still reidentify you, given sufficient computing power and strong algorithms, by mapping the shape of the connections between you—a kind of social fingerprinting. It can also involve predictive analysis based on powerful electronic “classifiers” that extract subtle patterns of travel or communication or purchases common to past terrorists in order to write, in effect, their own algorithms for detecting potential ones.
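To make the “social fingerprinting” idea concrete, here is a deliberately tiny sketch, with invented names and graphs: a pseudonymous account can be matched against a known network purely by the shape of its connections, with no content at all. The signature here (a node’s degree plus its neighbors’ sorted degrees) is a toy stand-in for the much richer structural features real deanonymization work uses.

```python
# Toy illustration of "social fingerprinting": matching pseudonymous
# accounts to known identities by connection structure alone.
# All names, graphs, and the signature function are invented for this sketch.

def signature(graph, node):
    # A node's structural fingerprint: its own degree plus the
    # sorted degrees of its neighbors.
    return (len(graph[node]),
            tuple(sorted(len(graph[n]) for n in graph[node])))

# Known network with real names (say, scraped from Facebook).
facebook = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"alice", "carol"},
}

# The same people on Twitter under fake names, same friendship structure.
twitter = {
    "x1": {"x2", "x3", "x4"},
    "x2": {"x1", "x3"},
    "x3": {"x1", "x2", "x4"},
    "x4": {"x1", "x3"},
}

# Each pseudonym's fingerprint narrows it to a short list of real names.
fb_sigs = {name: signature(facebook, name) for name in facebook}
for pseud in sorted(twitter):
    matches = sorted(n for n, s in fb_sigs.items()
                     if s == signature(twitter, pseud))
    print(pseud, "->", matches)
```

Even on this four-node toy the fingerprint cuts each pseudonym’s candidate set in half; on real networks, with richer features, the candidate sets typically collapse to a single person.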

Bracket for the moment whether we think some or all of these methods are wise. It should be crystal clear that a method of oversight designed for up-front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods. It will either forbid them completely or be absent from the parts of the process where the dangers to privacy exist. In practice what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to archive or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: No sufficiently large data set is truly anonymous.
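The classic demonstration of why stripping names doesn’t anonymize a large data set is the linkage attack: join the “harmless” fields left behind against some public roster. A minimal sketch, using entirely made-up records and field names:

```python
# A minimal linkage-attack sketch with invented data: records
# "anonymized" by dropping names can often be reidentified by joining
# their remaining quasi-identifiers against a public dataset.

# "Minimized" log: names removed, but zip code, birth year, and sex
# retained as supposedly harmless metadata.
anonymized = [
    {"zip": "20002", "birth_year": 1961, "sex": "F", "called": "202-555-0101"},
    {"zip": "21201", "birth_year": 1978, "sex": "M", "called": "410-555-0199"},
]

# A public roster (also hypothetical) carrying the same fields plus names.
voter_roll = [
    {"name": "J. Doe", "zip": "20002", "birth_year": 1961, "sex": "F"},
    {"name": "R. Roe", "zip": "21201", "birth_year": 1978, "sex": "M"},
    {"name": "S. Poe", "zip": "21201", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(record, roster):
    """Return the names of everyone whose quasi-identifiers match."""
    return [p["name"] for p in roster
            if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]

for rec in anonymized:
    print(rec["called"], "->", reidentify(rec, voter_roll))
```

Here both “anonymous” records resolve to exactly one name. The combination of zip code, birth date, and sex is famously identifying for most of the U.S. population, which is the intuition behind the claim that minimization by name-stripping no longer protects anyone.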

And realize the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, and growing fast, from an array of private and government sector sources—some presumably obtained using National Security Letters and Patriot 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans. As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes—that’s 4 million gigabytes—each month. Within about five years, NSA’s archive is expected to be measured in yottabytes—if you want to picture one yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point they will have to make up a new word for the next largest unit of data. As J. Edgar Hoover understood all too well, just having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.
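For readers who want to check the units, the figures above work out under standard decimal prefixes. The ~500-exabyte estimate of total Internet data is my assumption (a common rough figure for the era), not something stated in the text:

```python
# Sanity-checking the unit arithmetic above, using decimal prefixes.
GB, PB, YB = 10**9, 10**15, 10**24

# 4 petabytes a month really is 4 million gigabytes:
print(4 * PB // GB)    # 4000000

# The "2,000x the Internet" picture of a yottabyte works out if you take
# roughly 500 exabytes as the total data on the Internet (an assumption):
internet = 500 * 10**18
print(YB // internet)  # 2000
```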

There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective. But we also need to be aware that if we’re not attuned to the way new technologies may evade our old tripwires, if we only think of privacy in terms of certain familiar, paradigmatic violations—the boot in the door—then like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.

If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even if words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights and start considering the systemic and structural effects of the architectures of surveillance we’re constructing.
