Today Politico Arena asks:
Terror suspects: Eric Holder’s defense (nothing new here)–agree or disagree?
Foreign Policy magazine performs an important public service, publishing a compendium of the “top 10 worst predictions for 2009.” My favorite?
“If we do nothing, I can guarantee you that within a decade, a communist Chinese regime that hates democracy and sees America as its primary enemy will dominate the tiny country of Panama, and thus dominate the Panama Canal, one of the world’s most important strategic points.”
—Rep. Dana Rohrabacher (R-Calif.), Dec. 7, 1999
Rohrabacher made this alarming prediction during a debate on the U.S. handover of the Panama Canal. His fellow hawk, retired Adm. Thomas Moorer, even warned that China could sneak missiles into Panama and use the country as a staging ground for an attack on the United States. Well, Rohrabacher’s decade ran out this December, and all remains quiet on the Panamanian front. As for China, the United States is now its largest trading partner.
The point here isn’t to poke fun at Rohrabacher, or any of the other predictors featured on the FP list. Rather, it’s to point out that predicting the future is really hard. And as Ben Friedman and I have harped on, you just can’t aspire to any predictive competence without sound theory to guide you. In order to judge that if we do (or don’t do) X, Y will happen, you need a theory connecting X to Y. So looking back at our predictions, and comparing them to the results of our policies, is a useful way to test the theories on which we based our policies in the first place.
Putting falsifiable predictions out there is a collective action problem, though: If I start offering nothing but precise point-predictions about what will or won’t happen if we start a war with Iran, or how big the defense budget will get, or anything else, I’m going to get a lot of things wrong. And if everyone else keeps offering vapid, non-falsifiable rhetoric, I stand to look like a real jackass while everyone can hide behind the fog of common-use language. As I wrote in the National Interest a while back:
Foreign-policy analysts have an incredibly difficult task: to make predictions about the future based on particular policy choices in Washington. These difficulties extend into the world of intelligence, as well. The CIA issues reports with impossibly ambitious titles like “Mapping the Global Future”, as if anyone could actually do that. The father of American strategic analysis, Sherman Kent, grappled with these difficulties in his days at OSS and CIA. When Kent finally grew tired of the vapid language used for making predictions, such as “good chance of”, “real likelihood that” and the like, he ordered his analysts to start putting odds on their assessments. When a colleague complained that Kent was “turning us into the biggest bookie shop in town”, Kent replied that he’d “rather be a bookie than a [expletive] poet.”
Actually, though, it’s worse than this. As I wrote in the American Conservative, there’s basically no endogenous mechanism to hold irresponsible predictors accountable:
In 1992, the Los Angeles Times ran an article outlining the dynamics of the “predictions” segment of the popular “McLaughlin Group” TV program. Michael Kinsley, who had been a panelist on the program, admitted
“When I was doing the show, I was much more interested in coming up with an interesting prediction than in coming up with one that was true. There’s no penalty for being wrong, but there is a penalty for being boring. …Prognosticators have known for centuries that people only remember what you got right. They don’t remember what you got wrong.”
Foreign-policy analysis works in much the same way. Errant predictions are quickly forgotten. It is the interesting predictions that the media want, and unfortunately interesting predictions in the context of foreign policy often mean predictions of unprovoked foreign attacks, geopolitical chaos, and a long queue of bogeymen waiting to threaten us. (By contrast, after a given policy is enacted, its proponents have to spin it in a positive light, as in Iraq.) Meanwhile, it is the person with the quickest wit and the pithiest one-liner–not the deepest understanding–who winds up with the responsibility of informing the American electorate about foreign-policy decisions.
So it’s very good to see that Foreign Policy has interest in holding everyone’s feet to the fire. John Mueller does a similar service in The Atomic Obsession, pointing out the many predictions of doom, apocalypse and general disaster that have characterized both the hawkish establishment and the leftish arms-control clique.
If this sort of exercise becomes common, though, watch for foreign-policy commentators not to develop a growing sense of modesty about their predictive power, but rather to take greater care in avoiding falsifiable statements altogether.
The invaluable Chris Soghoian has posted some illuminating—and sobering—information on the scope of surveillance being carried out with the assistance of telecommunications providers. The entire panel discussion from this year’s ISS World surveillance conference is well worth listening to in full, but surely the most striking item is a direct quotation from Sprint’s head of electronic surveillance:
[M]y major concern is the volume of requests. We have a lot of things that are automated but that’s just scratching the surface. One of the things, like with our GPS tool. We turned it on the web interface for law enforcement about one year ago last month, and we just passed 8 million requests. So there is no way on earth my team could have handled 8 million requests from law enforcement, just for GPS alone. So the tool has just really caught on fire with law enforcement. They also love that it is extremely inexpensive to operate and easy, so, just the sheer volume of requests they anticipate us automating other features, and I just don’t know how we’ll handle the millions and millions of requests that are going to come in.
To be clear, that doesn’t mean they are giving law enforcement geolocation data on 8 million people. He’s talking about the wonderful automated backend Sprint runs for law enforcement, LSite, which allows investigators to rapidly retrieve information directly, without the burden of having to get a human being to respond to every specific request for data. Rather, says Sprint, each of those 8 million requests represents a time when an FBI computer or agent pulled up a target’s location data using their portal or API. (I don’t think you can Tweet subpoenas yet.) For an investigation whose targets are under ongoing realtime surveillance over a period of weeks or months, that could very well add up to hundreds or thousands of requests for a few individuals. So those 8 million data requests, according to a Sprint representative in the comments, actually “only” represent “several thousand” discrete cases.
As Kevin Bankston argues, that’s not entirely comforting. The Justice Department, Soghoian points out, is badly delinquent in reporting on its use of pen/trap orders, which are generally used to track communications routing information like phone numbers and IP addresses, but are likely to be increasingly used for location tracking. And recent changes in the law may have made it easier for intelligence agencies to turn cell phones into tracking devices. In the criminal context, the legal process for getting geolocation information depends on a variety of things—different districts have come up with different standards, and it matters whether investigators want historical records about a subject or ongoing access to location info in real time. Some courts have ruled that a full-blown warrant is required in some circumstances; in others, that a “hybrid” order consisting of a pen/trap order and a 2703(d) order suffices. But a passage from an Inspector General’s report suggests that the 2005 PATRIOT reauthorization may have made it easier to obtain location data:
After passage of the Reauthorization Act on March 9, 2006, combination orders became unnecessary for subscriber information and [REDACTED PHRASE]. Section 128 of the Reauthorization Act amended the FISA statute to authorize subscriber information to be provided in response to a pen register/trap and trace order. Therefore, combination orders for subscriber information were no longer necessary. In addition, OIPR determined that substantive amendments to the statute undermined the legal basis for which OIPR had received authorization [REDACTED PHRASE] from the FISA Court. Therefore, OIPR decided not to request [REDACTED PHRASE] pursuant to Section 215 until it re-briefed the issue for the FISA Court. As a result, in 2006 combination orders were submitted to the FISA Court only from January 1, 2006, through March 8, 2006.
The new statutory language permits FISA pen/traps to get more information than is allowed under a traditional criminal pen/trap, with a lower standard of review, including “any temporarily assigned network address or associated routing or transmission information.” Bear in mind that it would have made sense to rely on a 215 order only if the information sought was more extensive than what could be obtained using a National Security Letter, which requires no judicial approval. That makes it quite likely that it’s become legally easier to transform a cell phone into a tracking device even as providers are making it point-and-click simple to log into their servers and submit automated location queries. So it’s become much more urgent that the Justice Department start living up to its obligation to tell us how often they’re using these souped-up pen/traps, and how many people are affected. In congressional debates, pen/trap orders are invariably mischaracterized as minimally intrusive, providing little more than the list of times and phone numbers they produced 30 years ago. If they’re turning into a plug-and-play solution for lojacking the population, Americans ought to know about it.
If you’re interested enough in this stuff to have made it through that discussion, incidentally, come check out our debate at Cato this afternoon, either in the flesh or via webcast. There will be a simultaneous “tweetchat” hosted by the folks at Get FISA Right.
I’ve been poring over the trove of documents the Electronic Frontier Foundation has obtained detailing the long process by which the FISA Amendments Act—which substantially expanded executive power to conduct sweeping surveillance with little oversight—was hammered out between Hill staffers and lawyers at the Department of Justice and intelligence agencies. The really interesting stuff, of course, is mostly redacted, and I’m only partway though the stacks, but there are a few interesting tidbits so far.
As Wired has already reported, one e-mail shows Bush officials feared that if the attorney general was given too much discretion over retroactive immunity for telecoms that aided in warrantless wiretapping, the next administration might refuse to provide it.
A couple other things stuck out for me. First, while it’s possible they’ve been released before and simply not crossed my desk, there are a series of position papers — so rife with underlining that they look like some breathless magazine subscription pitch — circulated to Congress explaining the Bush administration’s opposition to various proposed amendments to the FAA. Among these was a proposal by Sen. Russ Feingold (D-WI) that would have barred “bulk collection” of international traffic and required that the broad new intelligence authorizations specify (though not necessarily by name) individual targets. The idea here was that if there were particular suspected terrorists (for instance) being monitored overseas, it would be fine to keep monitoring their communications if they began talking with Americans without pausing to get a full-blown warrant — but you didn’t want to give NSA carte blanche to just indiscriminately sweep in traffic between the U.S. and anyone abroad. The position paper included in these documents is more explicit than the others that I’ve seen about the motive for objecting to the bulk collection amendment. Which was, predictably, that they wanted to do bulk collection:
- It also would prevent the intelligence community from conducting the types of intelligence collection necessary to track terrorists and develop new targets.
- For example, this amendment could prevent the intelligence community from targeting a particular group of buildings or a geographic area abroad to collect foreign intelligence prior to operations by our armed forces.
So to be clear: Contra the rhetoric we heard at the time, the concern was not simply that NSA would be able to keep monitoring a suspected terrorist when he began calling up Americans. It was to permit the “targeting” of entire regions, scooping up all communications between the United States and the chosen area.
One other exchange at least raises an eyebrow. If you were following the battle in Congress at the time, you may recall that there was a period when the stopgap Protect America Act had expired — though surveillance authorized pursuant to the law could continue for many months — and before Congress approved the FAA. A week into that period, on February 22, 2008, the attorney general and director of national intelligence sent a letter warning Congress that they were now losing intelligence because providers were refusing to comply with new requests under existing PAA authorizations. A day later, they had to roll that back, and some of the correspondence from the EFF FOIA record makes clear that there was an issue with a single recalcitrant provider who decided to go along shortly after the letter was sent.
But there’s another wrinkle. A week prior to this, just before the PAA was set to expire, Jeremy Bash, the chief counsel for the House Permanent Select Committee on Intelligence, sent an email to “Ken and Ben,” about a recent press conference call. It’s clear from context that he’s writing to Assistant Attorney General Kenneth Wainstein and General Counsel for the Director of National Intelligence Ben Powell about this press call, where both men fairly clearly suggest that telecoms are balking for fear that they’ll no longer be immune from liability for participation in PAA surveillance after the statute lapses. Bash wants to confirm whether they really said that “private sector entities have refused to comply with PAA certifications because they were concerned that the law was temporary.” In particular, he wants to know whether this is actually true, because “the briefs I read provided a very different rationale.” In other words, Bash — who we know was cleared for the most sensitive information about NSA surveillance — was aware of some service providers being reluctant to comply with “new taskings” under the law, but not because of the looming expiration of the statute. One of his correspondents — whether Wainstein or Powell is unclear — shoots back denying having said any such thing (read the transcript yourself) and concluding with a terse:
Not addressing what is in fact the situation on both those issues (compliance and threat to halt) on this email.
In other words, the actual compliance issues they were encountering would have to be discussed over a more secure channel. If the issue wasn’t the expiration, though, what would the issue have been? The obvious alternative possibility is that NSA (or another agency) was attempting to get them to carry out surveillance that they thought might fall outside the scope of either the PAA or a particular authorization. Given how sweeping these were, that should certainly give us pause. It should also raise some questions as to whether, even before that one holdout fell into compliance, the warning letter from the AG and the DNI was misleading. Was there really ever a “gap” resulting from the statute’s sunset, or was it a matter of telecoms balking at an attempt by the intelligence community to stretch the bounds of their legal authority? The latter would certainly fit a pattern we saw again and again under the Bush administration: break the law, inducing a legal crisis, then threaten bloody mayhem if the unlawful program is forced to abruptly halt — at which point a nervous Congress grants its blessing.
This is a reminder, citizen: Only cranks worry about vastly increased governmental power to gather transactional data about Americans’ online behavior. Why, just last week, Rep. Lamar Smith (R-TX) informed us that there has not been any “demonstrated or recent abuse” of such authority by means of National Security Letters, which permit the FBI to obtain many telecommunications records without court order. I mean, the last Inspector General report finding widespread and systemic abuse of those came out, like, over a year ago! And as defenders of expanded NSL powers often remind us, similar records can often be obtained by grand jury subpoena.
It instructed [System administrator Kristina] Clair to “include IP addresses, times, and any other identifying information,” including e-mail addresses, physical addresses, registered accounts, and Indymedia readers’ Social Security Numbers, bank account numbers, credit card numbers, and so on.
The sweeping request came with a gag order prohibiting Clair from talking about it. (As a constitutional matter, courts have found that recipients of such orders must at least be allowed to discuss them with attorneys in order to seek advice about their legality, but the subpoena contained no notice of that fact.) Justice Department officials tell McCullagh that the request was never reviewed directly by the Attorney General, as is normally required when information is sought from a press organization. Clair did tell attorneys at the Electronic Frontier Foundation, and when they wrote to U.S. Attorney Timothy Morrison questioning the propriety of the request, it was promptly withdrawn. EFF’s Kevin Bankston explains the legal problems with the subpoena at length.
Perhaps ironically, the targeting of Indymedia, which is about as far left as news sites get, may finally hip the populist right to the perils of the burgeoning surveillance state. It seems to have piqued Glenn Beck’s interest, and McCullagh went on Lou Dobbs’ show to talk about the story. Thus far, the approved conservative position appears to have been that Barack Obama is some kind of ruthless Stalinist with a secret plan to turn the United States into a massive gulag—but under no circumstances should there be any additional checks on his administration’s domestic spying powers. This always struck me as both incoherent and a tragic waste of paranoia. Now that we’ve had a rather public reminder that such powers can be used to compile databases of people with politically unorthodox browsing habits, perhaps Beck—who seems to be something of an amateur historian—will take some time to delve into the story of COINTELPRO and other related projects our intelligence community busied itself with before we established an architecture of surveillance oversight in the late ’70s.
You know, the one we’ve spent the past eight years dismantling.
According to CBS News, President Barack Obama will send most, if not all, of the 40,000 additional troops that General Stanley McChrystal requested and reportedly plans to keep those troops in Afghanistan for the long-term.
If the CBS report turns out to be true—the White House has backed away, and other news outlets are leaving the story alone for the moment—the president’s decision is disappointing, but expected. Last month, the administration ruled out the notion of a near-term U.S. exit from Afghanistan, arguing that the Taliban and al Qaeda would perceive an early pullout as a victory over the United States. But if avoiding a perception of weakness is the rationale that the administration is operating under, then we have already lost by allowing our enemies to dictate the terms of the war.
Gen. McChrystal’s ambitious strategy hopes to integrate U.S. troops into the Afghan population. These additional troops might reduce violence in the short- to medium-term. But this strategy rests on the presumption that Afghans in heavily contested areas want the protection of foreign troops. The reality might be very different; western forces might instead be perceived as a magnet for violence.
McChrystal’s strategy also presumes that an additional 40,000 troops will be enough. But proponents of an ambitious counterinsurgency strategy need to come clean on the total bill that would be required. For a country the size of Afghanistan, with roughly 31 million people, the Army and Marine Corps counterinsurgency doctrine advises between 620,000 and 775,000 counterinsurgents—whether native or foreign. Furthermore, typical counterinsurgency missions require such concentrations of forces for a decade or more. Given these realities, we could soon hear cries of “surge,” “if only,” and “not enough.”
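For what it’s worth, the doctrine’s totals are easy to check: the field manual’s oft-cited planning ratio is roughly 20 to 25 counterinsurgents per 1,000 residents (the ratio is an assumption here, since the text gives only the totals). A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the force levels implied by the
# counterinsurgency doctrine's planning ratio, assumed here to be
# roughly 20-25 counterinsurgents per 1,000 residents.
population = 31_000_000  # rough Afghan population at the time

low = population * 20 // 1000   # lower bound of the ratio
high = population * 25 // 1000  # upper bound of the ratio

print(f"{low:,} to {high:,} counterinsurgents")
# 620,000 to 775,000 counterinsurgents
```

Which is to say the 40,000-troop request is an order of magnitude short of what the doctrine itself implies, even counting Afghan forces.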
Even if the United States and its allies committed themselves to decades of armed nation building, success against al Qaeda would hardly be guaranteed. After all, in the unlikely event that we forged a stable Afghanistan, al Qaeda would simply reposition its presence into other regions of the world.
It is well past time for the United States to adapt means to ends. The choice for President Obama is not between counterterrorism and counterinsurgency, but between counterterrorism alone and counterterrorism combined with counterinsurgency. Protecting the United States from terrorism does not require U.S. troops to police Afghan villages. Where terrorists do appear, we hardly need to tinker with their communal identities. We can target our enemies with allies on the ground or, if that fails, by relying on timely intelligence for use in targeted airstrikes or small-unit raids.
President Obama’s decision on Afghanistan could define his presidency. If an escalating military strategy leads only to thousands of more deaths, and at a cost of tens or hundreds of billions of dollars, then that is a bitter legacy indeed.
Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.
I’d sketched out a rather longer version of my remarks in advance just to make sure I had my main ideas clear, and so I’ll post them here, as a sort of preview of a rather longer and more formal paper on 21st century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:
Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead. Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.
So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried, this time to a British debutante, bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, that he forbade his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.
Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.
The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came on to your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.
So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.
So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.
First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.
The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like. You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.
But there’s an even less obvious potential harm here. If the police didn’t need a warrant, everyone who made a phone call would know that they could listen in whenever they felt like it. Wiretapping is expensive and labor-intensive enough that, realistically, they can only be gathering information about a few people at a time. But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance. To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.
Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations. So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.
Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say–or even listen to–controversial speech. Looking at the incredible scope of documented intelligence abuses from the 60s and 70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.
Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the 20s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card information—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.
The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice, or for a sufficiently unusual regional dialect, in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns. If you’re on Facebook, and you and a bunch of your friends all decide to use fake names when you sign up for Twitter, I can still reidentify you, given sufficient computing power and strong algorithms, by mapping the shape of the connections between you—a kind of social fingerprinting. And it can involve predictive analysis: powerful electronic “classifiers” extract subtle patterns of travel or communication or purchases common to past terrorists, in effect writing their own algorithms for detecting potential ones.
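To make the “social fingerprinting” idea concrete, here is a toy sketch. The graphs, names, and matching rule are invented for illustration; published reidentification research (for instance Narayanan and Shmatikov’s work on de-anonymizing social networks) uses far more robust graph-matching techniques than this.

```python
# Toy social fingerprinting: match pseudonymous accounts on one network
# to named accounts on another purely from the shape of the friendships.
# Uses simple "color refinement": a node's signature starts as its degree
# and is repeatedly combined with its neighbors' signatures.

def refine(graph, labels):
    """One refinement round: new label = (old label, sorted neighbor labels)."""
    return {n: (labels[n], tuple(sorted(labels[m] for m in graph[n])))
            for n in graph}

def signatures(graph, rounds=2):
    """Structural signature for every node after a few refinement rounds."""
    labels = {n: len(graph[n]) for n in graph}  # start from degree
    for _ in range(rounds):
        labels = refine(graph, labels)
    return labels

def match_accounts(named, anon):
    """Pair pseudonymous nodes with named nodes sharing a unique signature."""
    by_sig = {}
    for node, sig in signatures(named).items():
        by_sig.setdefault(sig, []).append(node)
    return {anon_node: by_sig[sig][0]
            for anon_node, sig in signatures(anon).items()
            if len(by_sig.get(sig, [])) == 1}

# A (fictional) friendship graph under real names...
facebook = {
    "alice": ["bob", "erin", "carol"],
    "bob":   ["alice", "carol"],
    "carol": ["bob", "dave", "alice"],
    "dave":  ["carol", "erin", "frank"],
    "erin":  ["dave", "alice"],
    "frank": ["dave"],
}
# ...and the same six people on another service under fake names,
# with the same friendships.
twitter = {
    "u1": ["u2", "u5", "u3"],
    "u2": ["u1", "u3"],
    "u3": ["u2", "u4", "u1"],
    "u4": ["u3", "u5", "u6"],
    "u5": ["u4", "u1"],
    "u6": ["u4"],
}

# Every pseudonym is re-identified from structure alone.
print(match_accounts(facebook, twitter))
```

Because this example graph has no symmetries, two refinement rounds are enough to give every account a unique structural signature; real networks are far larger, but they are also far richer in distinguishing structure.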
Bracket for the moment whether we think some or all of these methods are wise. It should be crystal clear that a model of oversight designed for up-front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods: it will either forbid them completely or be absent from the parts of the process where the dangers to privacy actually lie. In practice, what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to archive or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: no sufficiently large data set is truly anonymous.
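To see why merely stripping names fails, consider a toy linkage attack (all records and field names here are invented for the example). The quasi-identifiers left behind—ZIP code, birth date, sex—can be joined against a public data set that still carries names; Latanya Sweeney famously estimated that those three fields alone uniquely identify the large majority of Americans.

```python
# Toy linkage attack: "anonymized" records re-identified by joining on
# quasi-identifiers (ZIP, birth date, sex). All records are invented.

anonymized_records = [
    {"zip": "20007", "birth": "1952-07-14", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "20008", "birth": "1980-03-02", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "J. Smith", "zip": "20007", "birth": "1952-07-14", "sex": "F"},
    {"name": "R. Jones", "zip": "20008", "birth": "1980-03-02", "sex": "M"},
]

def reidentify(anon_rows, named_rows, keys=("zip", "birth", "sex")):
    """Join the two data sets on quasi-identifiers; any unique match
    re-attaches a name to a supposedly anonymous record."""
    index = {}
    for row in named_rows:
        index.setdefault(tuple(row[k] for k in keys), []).append(row["name"])
    linked = []
    for row in anon_rows:
        names = index.get(tuple(row[k] for k in keys), [])
        if len(names) == 1:  # unique in the named set -> re-identified
            linked.append((names[0], row["diagnosis"]))
    return linked

print(reidentify(anonymized_records, public_voter_roll))
# [('J. Smith', 'hypertension'), ('R. Jones', 'asthma')]
```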
And consider the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, and is growing fast, drawn from an array of private- and government-sector sources—some presumably obtained using National Security Letters and Patriot 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans. As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes—that’s 4 million gigabytes—each month. Within about five years, NSA’s archive is expected to be measured in yottabytes. If you want to picture a yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point, they will have to coin a new word for the next-largest unit of data. As J. Edgar Hoover understood all too well, merely having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.
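For scale, the unit conversions in that paragraph can be checked in a few lines (decimal SI prefixes; the 2,000-times-the-Internet comparison is the article’s own figure, not computed here):

```python
# Back-of-the-envelope check of the storage figures above,
# using decimal SI prefixes.
GB = 10**9    # gigabyte
PB = 10**15   # petabyte
YB = 10**24   # yottabyte

monthly_growth = 4 * PB
print(monthly_growth // GB)   # 4000000 -- "4 million gigabytes" a month
print(YB // PB)               # 1000000000 -- a yottabyte is a billion petabytes
```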
There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective. But we also need to be aware that if we’re not attuned to the ways new technologies may evade our existing tripwires—if we think of privacy only in terms of certain familiar, paradigmatic violations, the boot in the door—then, like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.
If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even when words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights and start considering the systemic, structural effects of the architectures of surveillance we’re constructing.
This work by Cato Institute is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.