
Crime and Punishment in the Intel Community

On Thursday, the government indicted former National Security Agency executive Thomas Drake for obstructing justice and mishandling classified documents—though the underlying crime, for which Drake was not actually charged, was leaking embarrassing information to national security reporter Siobhan Gorman (then of the Baltimore Sun, now at The Wall Street Journal). As Glenn Greenwald observes, the decision to move forward with a rare leak prosecution in Drake’s case stands in rather sharp contrast to the decision to look the other way when it comes to other sorts of wrongdoing in the world of intelligence.

For years, the NSA managed a sweeping program of warrantless wiretaps and large-scale data mining, which a federal judge recently confirmed was in gross violation of the Foreign Intelligence Surveillance Act. The telecoms who participated in the scheme were, equally clearly, violating the Electronic Communications Privacy Act. The FBI separately and systematically flouted the same law by obtaining call records for thousands of phone numbers without any legitimate legal process. And, of course, there’s the little matter of torture. For these crimes, the administration has pronounced a verdict of “boys will be boys,” on the grounds that it’s better to gaze boldly into our shining future than get bogged down in recriminations over all that old stuff.

Drake didn’t spy on the conversations of Americans without a court order, or subject detainees to simulated drowning or sleep deprivation. Far worse, apparently, he embarrassed the NSA. The first article for which he acted as a source, “Computer ills hinder the NSA,” detailed how the agency had squandered billions on faulty computer systems that were getting in the way of effective intelligence work:

One [system] is Cryptologic Mission Management, a computer software program with an estimated cost of $300 million that was designed to help the NSA track the implementation of new projects but is so flawed that the agency is trying to pull the plug. The other, code-named Groundbreaker, is a multibillion-dollar computer systems upgrade that frequently gets its wires crossed.

The downfall of the Cryptologic Mission Management program has not previously been disclosed. While Congress raised concerns about the agency’s management of Groundbreaker in a 2003 report, the extent and impact of its inadequacies have not been discussed publicly.

To be sure, Drake broke the law—just as Daniel Ellsberg did when he leaked the Pentagon Papers. But it’s hard to say how the law here was working to protect national security, as opposed to the agency’s image. In any event, the contrast between the reaction to Drake and the non-reaction to other forms of lawbreaking makes the standard in effect for Bush-era misdeeds clear: If you illegally gathered information on members of the public, Obama’s DOJ would rather let sleeping dogs lie. If you illegally tried to get information to the public, you’d better lawyer up.  From Main Justice to Fort Meade, message received.

State Secrets, Courts, and NSA’s Illegal Wiretapping

As Tim Lynch notes, Judge Vaughn Walker has ruled in favor of the now-defunct Al-Haramain Islamic Foundation—unique among the many litigants who have tried to challenge the Bush-era program of warrantless wiretapping by the National Security Agency because they actually had evidence, in the form of a document accidentally delivered to foundation lawyers by the government itself, that their personnel had been targeted for eavesdropping.

Other efforts to get a court to review the program’s legality had been caught in a kind of catch-22: Plaintiffs who merely feared that their calls might be subject to NSA filtering and interception lacked standing to sue, because they couldn’t show a specific, concrete injury resulting from the program.

But, of course, information about exactly who has been wiretapped is a closely guarded state secret. So closely guarded, in fact, that the Justice Department was able to force the return of the document that exposed the wiretapping of Al-Haramain, and then get it barred from the court’s consideration as a “secret” even after it had been disclosed. (Contrast, incidentally, the Supreme Court’s jurisprudence on individual privacy rights, which often denies any legitimate expectation of privacy in information once revealed to a third party.) Al-Haramain finally prevailed because they were ultimately able to assemble evidence from the public record showing they’d been wiretapped, and the government declined to produce anything resembling a warrant for that surveillance.

If you read over the actual opinion, however, it may seem a little anticlimactic—as though something is missing. The ruling concludes that there’s prima facie evidence that Al-Haramain and their lawyers were wiretapped, that the government has failed to produce a warrant, and that this violates the Foreign Intelligence Surveillance Act. But of course, there was never any question about that. Not even the most strident apologists for the NSA program denied that it contravened FISA; rather, they offered a series of rationalizations for why the president was entitled to disregard a federal statute.

There was the John Yoo argument that the president essentially becomes omnipotent during wartime, and that if we can shoot Taliban on a foreign battlefield, surely we can wiretap Americans at home if they seem vaguely Taliban-ish. Even under Bush, the Office of Legal Counsel soon backed away from such… creative… lines of argument. Instead, they relied on the post-9/11 Authorization for the Use of Military Force (AUMF) against al-Qaeda, claiming it had implicitly created a loophole in the FISA law. It was David Kris, now head of DOJ’s National Security Division, who most decisively blew that one out of the water, concluding that it was “essentially impossible” to sustain the government’s reading of the AUMF.

Yet you’ll note that none of these issues arise in Walker’s opinion, because the DOJ, in effect, refused to play. They resisted the court at every step, insisting that a program discussed at length on the front pages of newspapers for years now was so very secret that no aspect of it could be discussed even in a closed setting. They continued to insist on this in the face of repeated court rulings to the contrary. So while Al-Haramain has prevailed, there’s no ruling on the validity of any of those arguments. That’s why I think Marcy Wheeler is probably correct when she predicts that the government will simply take its lumps and pay damages rather than risk an appeal. For one, while the Obama administration has been happy to invoke state secrecy as vigorously as its predecessor, it would obviously be somewhat embarrassing for Obama’s DOJ to parrot Bush’s substantive claims of near-limitless executive power. Perhaps more to the point, though, some of those legal arguments may still be operative in secret OLC memos. The FISA Amendments Act aimed to put the unlawful Bush program under court supervision, and even reasserted FISA’s language establishing it as the “exclusive means” for electronic surveillance, which would seem to drive a final stake in the heart of any argument based on the AUMF. But we ultimately don’t know what legal rationales they still consider operative, and it would surely be awkward to have an appellate court knock the legs out from under some of these secret memoranda.

None of this is to deny that the ruling is a big deal—if nothing else because it suggests that the government does not enjoy total carte blanche to shield lawbreaking from review with broad, bald assertions of privilege. But I also know that civil libertarians had hoped that the courts might offer the only path to a fuller accounting of—and accountability for—the domestic spying program. If the upshot of this is simply that the government must pay a few tens, or even hundreds of thousands of dollars in damages, it’s hard not to see the victory as something of a disappointment.

Wednesday Links

  • Federal judge dismisses charges against Blackwater guards over the killing of 17 in Baghdad. David Isenberg: “The fact that the Blackwater contractors are not getting a trial will only serve to further increase suspicion of and hostility towards security contractors. It is going to be even more difficult for them to gain the trust of local populations or government officials in the countries they work in.”
  • New report shows state and local government workers have higher average compensation levels than private workers.
  • Podcast: “Televising and Subsidizing the Big Game” featuring Neal McCluskey. “Everybody should watch the National College Football Championship because whether you’re interested or not, you are paying for it,” he says.

McCain: Interests of Defense Contractors May Conflict with US National Interest

USA Today reports that retired military officers join the boards of directors of, or become employees of, defense contractors and take home big bags of money doing so.  Not surprising.  At the same time, the paper reports, lots of them are being paid by the Pentagon to be “senior mentors” of their former colleagues. Not being government employees, but rather independent contractors, these folks aren’t subject to government ethics rules.  To take one example, as chairman of BAE Systems, Gen. Anthony Zinni is clearing almost a million a year, in addition to his $129,000 per year government pension.  In addition to all that, the Pentagon pays him about $2,000 per day to “mentor” people at DOD.

As the article points out, information is invaluable to the defense contractors in these contexts.  The knowledge of what’s going on at DOD is extremely useful for planners at the defense companies, and so while the retired officers are protesting that being paid nearly $2,000 per day by DOD for their work as mentors is “way below the industry average,” it increases their value to, and presumably their compensation from, their military-industrial employers.  As one coordinator of the mentors program told the retired officers, “you’re getting paid in two ways–monetarily and informationally.”

This isn’t too surprising a story, but the crowning irony comes as Sen. John McCain calls for an ethics rewrite and offers his view that “the important thing is that [the involved officers] avoid the appearance of conflict.” This is a puzzling remark coming from a man whose top foreign-policy adviser was collecting hundreds of thousands of dollars from the Georgian government to lobby McCain at the same time he was being paid by McCain to advise him on foreign policy.

McCain’s thoughts about conflict of interest in that instance?  He was “so proud” of his lobbyist-cum-adviser.  Presumably once McCain issued his ridiculous “today we are all Georgians” fatwa it became a patriotic duty to take money from foreign governments to represent their interests.  But in the case of the proposed reforms–which would attempt to institute some semblance of transparency in these mentoring deals–one can only wish the senator from Arizona the best.

The FISA Amendments: Behind the Scenes

I’ve been poring over the trove of documents the Electronic Frontier Foundation has obtained detailing the long process by which the FISA Amendments Act—which substantially expanded executive power to conduct sweeping surveillance with little oversight—was hammered out between Hill staffers and lawyers at the Department of Justice and intelligence agencies. The really interesting stuff, of course, is mostly redacted, and I’m only partway through the stacks, but there are a few interesting tidbits so far.

As Wired has already reported, one e-mail shows Bush officials feared that if the attorney general was given too much discretion over retroactive immunity for telecoms that aided in warrantless wiretapping, the next administration might refuse to provide it.

A couple other things stuck out for me. First, while it’s possible they’ve been released before and simply not crossed my desk, there are a series of position papers — so rife with underlining that they look like some breathless magazine subscription pitch — circulated to Congress explaining the Bush administration’s opposition to various proposed amendments to the FAA. Among these was a proposal by Sen. Russ Feingold (D-WI) that would have barred “bulk collection” of international traffic and required that the broad new intelligence authorizations specify (though not necessarily by name) individual targets. The idea here was that if there were particular suspected terrorists (for instance) being monitored overseas, it would be fine to keep monitoring their communications if they began talking with Americans without pausing to get a full-blown warrant — but you didn’t want to give NSA carte blanche to just indiscriminately sweep in traffic between the U.S. and anyone abroad. The position paper included in these documents is more explicit than the others that I’ve seen about the motive for objecting to the bulk collection amendment. Which was, predictably, that they wanted to do bulk collection:

  • It also would prevent the intelligence community from conducting the types of intelligence collection necessary to track terrorists and develop new targets.
  • For example, this amendment could prevent the intelligence community from targeting a particular group of buildings or a geographic area abroad to collect foreign intelligence prior to operations by our armed forces.

So to be clear: Contra the rhetoric we heard at the time, the concern was not simply that NSA would be able to keep monitoring a suspected terrorist when he began calling up Americans. It was to permit the “targeting” of entire regions, scooping all communications between the United States and the chosen area.

One other exchange at least raises an eyebrow.  If you were following the battle in Congress at the time, you may recall that there was a period when the stopgap Protect America Act had expired — though surveillance authorized pursuant to the law could continue for many months — and before Congress approved the FAA. A week into that period, on February 22, 2008, the attorney general and director of national intelligence sent a letter warning Congress that they were now losing intelligence because providers were refusing to comply with new requests under existing PAA authorizations. A day later, they had to roll that back, and some of the correspondence from the EFF FOIA record makes clear that there was an issue with a single recalcitrant provider who decided to go along shortly after the letter was sent.

But there’s another wrinkle. A week prior to this, just before the PAA was set to expire, Jeremy Bash, the chief counsel for the House Permanent Select Committee on Intelligence, sent an email to “Ken and Ben,” about a recent press conference call. It’s clear from context that he’s writing to Assistant Attorney General Kenneth Wainstein and General Counsel for the Director of National Intelligence Ben Powell about this press call, where both men fairly clearly suggest that telecoms are balking for fear that they’ll no longer be immune from liability for participation in PAA surveillance after the statute lapses. Bash wants to confirm whether they really said that “private sector entities have refused to comply with PAA certifications because they were concerned that the law was temporary.” In particular, he wants to know whether this is actually true, because “the briefs I read provided a very different rationale.”  In other words, Bash — who we know was cleared for the most sensitive information about NSA surveillance — was aware of some service providers being reluctant to comply with “new taskings” under the law, but not because of the looming expiration of the statute. One of his correspondents — whether Wainstein or Powell is unclear — shoots back denying having said any such thing (read the transcript yourself) and concluding with a terse:

Not addressing what is in fact the situation on both those issues (compliance and threat to halt) on this email.

In other words, the actual compliance issues they were encountering would have to be discussed over a more secure channel. If the issue wasn’t the expiration, though, what would the issue have been? The obvious alternative possibility is that NSA (or another agency) was attempting to get them to carry out surveillance that they thought might fall outside the scope of either the PAA or a particular authorization. Given how sweeping these were, that should certainly give us pause. It should also raise some questions as to whether, even before that one holdout fell into compliance, the warning letter from the AG and the DNI was misleading. Was there really ever a “gap” resulting from the statute’s sunset, or was it a matter of telecoms balking at an attempt by the intelligence community to stretch the bounds of their legal authority? The latter would certainly fit a pattern we saw again and again under the Bush administration: break the law, inducing a legal crisis, then threaten bloody mayhem if the unlawful program is forced to abruptly halt — at which point a nervous Congress grants its blessing.

Some Thoughts on the New Surveillance

Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.

I’d sketched out a rather longer version of my remarks in advance just to make sure I had my main ideas clear, and so I’ll post them here, as a sort of preview of a rather longer and more formal paper on 21st century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:

Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead.  Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.

So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried, this time to a British debutante, bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, that he forbade his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.

Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.

The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came on to your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.

So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.

So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.

First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.

The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like.  You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.

But there’s an even less obvious potential harm here. If the police didn’t need a warrant, everyone who made a phone call would know that they could be listening in whenever they felt like it. Wiretapping is expensive and labor intensive enough that realistically the police can only be gathering information about a few people at a time.  But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance.  To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.

Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations.  So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.

Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say–or even listen to–controversial speech. Looking at the incredible scope of documented intelligence abuses from the 60s and 70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the 20s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card information—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.

The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice or a sufficiently unusual regional dialect in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns.  If you’re on Facebook, and you and a bunch of your friends all decide to use fake names when you sign up for Twitter, I can still reidentify you given sufficient computing power and strong algorithms by mapping the shape of the connections between you—a kind of social fingerprinting. It can involve predictive analysis based on powerful electronic “classifiers” that extract subtle patterns of travel or communication or purchases common to past terrorists in order to write their own algorithms for detecting potential ones.
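The social-fingerprinting idea above can be sketched in a few lines of Python. Everything here is invented for illustration: two tiny friendship graphs representing the same four people, once under real names and once under pseudonyms, matched purely by the shape of their connections. Real deanonymization attacks work on graphs with millions of nodes and use far cleverer algorithms than this brute-force isomorphism search, but the principle is the same.

```python
from itertools import permutations

# Toy illustration of "social fingerprinting": matching pseudonymous
# accounts to known identities purely by the shape of the friendship
# graph. All names and edges are invented.

real = {  # known identities -> their friends
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave": {"alice", "carol"},
}

fake = {  # pseudonyms on another service, same underlying people
    "x1": {"x2", "x3", "x4"},
    "x2": {"x1", "x3"},
    "x3": {"x1", "x2", "x4"},
    "x4": {"x1", "x3"},
}

def match(real, fake):
    # Brute-force search for a pseudonym -> identity mapping that
    # preserves every friendship edge. Feasible only for toy graphs;
    # real attacks exploit the same structure with smarter search.
    fakes = list(fake)
    for perm in permutations(real):
        mapping = dict(zip(fakes, perm))
        if all(
            {mapping[n] for n in fake[f]} == real[mapping[f]]
            for f in fakes
        ):
            return mapping
    return None

print(match(real, fake))
# -> {'x1': 'alice', 'x2': 'bob', 'x3': 'carol', 'x4': 'dave'}
```

No names, biographical details, or message contents are consulted at any point; the connection pattern alone is enough to pin each pseudonym to an identity.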

Bracket for the moment whether we think some or all of these methods are wise.  It should be crystal clear that a method of oversight designed for up front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods.  It will either forbid them completely or be absent from the parts of the process where the dangers to privacy exist. In practice what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to archive or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: No sufficiently large data set is truly anonymous.
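The claim that no sufficiently large data set is truly anonymous is easy to illustrate with the classic linkage attack: a "de-identified" record set joined to a public one on quasi-identifiers like ZIP code, birth date, and sex, which in combination are frequently unique. The records below are invented for the sketch.

```python
# Toy illustration of reidentification by linkage. An "anonymized"
# medical dataset (names stripped) is joined to a public voter roll
# on quasi-identifiers. All records are invented.

medical = [  # names removed, but quasi-identifiers retained
    {"zip": "20002", "dob": "1970-03-12", "sex": "F", "diagnosis": "flu"},
    {"zip": "20002", "dob": "1981-07-04", "sex": "M", "diagnosis": "asthma"},
]

voters = [  # public record that still carries names
    {"name": "Jane Roe", "zip": "20002", "dob": "1970-03-12", "sex": "F"},
    {"name": "John Doe", "zip": "20002", "dob": "1981-07-04", "sex": "M"},
]

def reidentify(medical, voters):
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    by_key = {key(v): v["name"] for v in voters}
    # When the quasi-identifier combination is unique, each
    # "anonymous" record points to exactly one name.
    return [(by_key.get(key(m)), m["diagnosis"]) for m in medical]

print(reidentify(medical, voters))
# -> [('Jane Roe', 'flu'), ('John Doe', 'asthma')]
```

Minimization that merely strips names leaves exactly these quasi-identifiers in place, which is why retrieval and linkage defeat it so easily at scale.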

And realize the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, and growing fast, from an array of private and government sector sources—some presumably obtained using National Security Letters and Patriot 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans.  As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes, that’s 4 million gigabytes—each month.  Within about five years, NSA’s archive is expected to be measured in yottabytes—if you want to picture one yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point they will have to make up a new word for the next largest unit of data.  As J. Edgar Hoover understood all too well, just having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.
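The unit arithmetic in the paragraph above can be checked directly. The sketch assumes decimal SI prefixes, where each step up (giga, tera, peta, ..., yotta) is a factor of 1,000:

```python
GB = 10**9   # gigabyte (decimal SI prefixes assumed throughout)
PB = 10**15  # petabyte
YB = 10**24  # yottabyte

# "4 petabytes, that's 4 million gigabytes" checks out:
assert 4 * PB == 4_000_000 * GB

# A yottabyte is a billion petabytes (a trillion terabytes):
assert YB == 10**9 * PB

print("unit arithmetic consistent")
```

The gap between those two scales is the point: a monthly intake measured in petabytes only reaches yottabyte territory through sustained, compounding growth in collection.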

There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective.  But we also need to be aware that if we’re not attuned to the way new technologies may avoid our old tripwires, if we only think of privacy in terms of certain familiar, paradigmatic violations—the boot in the door—then like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.

If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even if words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights, but also consider the systemic and structural effects of the architectures of surveillance we’re constructing.
