
What 9/11 Should Teach Us

As a fan of comedian Dennis Miller, I was astonished to discover that he became a supporter of U.S. government policies in fighting terrorism after the September 11th attacks. Perhaps I am in the minority on this issue, but it was the 9/11 attacks that eroded my faith in government.

Few people bring this up, but in 2004, a CIA Inspector General report found a number of weaknesses in the Intelligence Community’s pre-9/11 counterterrorism practices, many of which “contributed to performance lapses related to the handling of materials concerning individuals who were to become the 9/11 hijackers.” Two al Qaeda terrorists who later became 9/11 hijackers, Nawaf al-Hazmi and Khalid al-Mihdhar, had attended a meeting of suspected terrorists in Malaysia in early 2000. The Inspector General probe uncovered that the CIA had learned that one of the operatives had a U.S. visa, and the other had flown from Bangkok to Los Angeles.

Yet, the Agency failed to forward that relevant information by “entering the names of suspected al-Qa’ida terrorists on the ‘watchlist’ of the Department of State and providing information to the Federal Bureau of Investigation (FBI) in proper channels.” Some 50 to 60 individuals—including Headquarters personnel, overseas officers, managers, and junior employees—had read the cables containing the travel information on al-Hazmi and al-Mihdhar.

The report said in a stark assessment, “The consequences of the failures to share information and perform proper operational follow-through on these terrorists were potentially significant.” Indeed. Had the names been passed to the FBI and the State Department through proper channels, the operatives could have been watchlisted and surveilled. In theory, those steps could have yielded information on financing, flight training, and other details vital to unraveling the 9/11 plot.

Corroborating these findings was a Joint Inquiry Report by the Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence. It found “persistent problems” with the “lack of collaboration between Intelligence Community agencies.” About the FBI in particular, the report went so far as to say, as late as December 2002, that “…the Bureau—as a law enforcement organization—is fundamentally incapable, in its present form, of providing Americans with the security they require against foreign terrorist and intelligence threats.” Now that is a ringing endorsement of our government’s ability to protect us.

We often hear that the failure of 9/11 was government-wide. But few observers delve into why it failed, especially on 9/11 anniversaries, when, one would think, such explanations would be most helpful. A number of structural factors impede effective collaboration. For instance, many intelligence agencies operate under different legal authorities. Many of them have distinct customers and cultures, and jealously guard their turf, budgets, sources, and methods. Individuals within various agencies also share information by relying on trust and personal relationships.

Yet knowledge was so dispersed that no single person or “silver bullet” could have enabled intelligence agencies to prevent the 9/11 attacks. As the CIA Inspector General report made clear, neither the U.S. government nor the Intelligence Community had a comprehensive strategic plan to guide counterterrorism efforts. Amid the pre-9/11 flurry of warnings, intelligence cables, and briefing materials on al Qaeda’s plot to hijack airliners and ram them into our buildings, a significant failure, concluded the 9/11 Commission, was one of imagination.

After 9/11, many Americans were quick to cede yet more power to government. While much has changed in eleven years, with agencies less reluctant to share critical data, a February 2011 Government Accountability Office report noted that the government “does not yet have a fully-functioning Information Sharing Environment,” that is, “an approach that facilitates the sharing of terrorism and homeland security information”:

GAO found that the government had begun to implement some initiatives that improved sharing but did not yet have a comprehensive approach that was guided by an overall plan and measures to help gauge progress and achieve desired results.

Over the decade, while our government focused narrowly on the problem of terrorism, it also embraced ambitious, wasteful, and counterproductive programs and policies that drained us economically and spread our resources thin. After 9/11, excluding the invasions and occupations of Iraq and Afghanistan, American taxpayers have shelled out over $1 trillion for a sprawling counterterrorism-industrial complex, replete with its thousands of federal, state, and local government organizations and the private companies that work with them.

Perhaps it is unsurprising that our government expanded after an attack that called into question its primary constitutional function: protecting our country. What is more remarkable is that the public continues to accept humiliating pat-downs and invasive full-body scans for airline travel, costly grant programs rolled out by the Department of Homeland Security, and reckless politicians who advocate endless wars against predominantly Muslim states that play directly into al Qaeda’s hands.

Now, many Americans ask: Are we safer? Certainly, but marginal increases in safety have come at an exceptionally high cost, have far exceeded the point of diminished returns, and have encouraged a terrorized public to exalt a government that failed them.

What We Can and Can’t Know About NSA Spying: A Reply to Prof. Cordero

Georgetown Law professor Carrie Cordero—who previously worked at the Department of Justice improving privacy procedures for monitoring under the Foreign Intelligence Surveillance Act—attended our event with Sen. Ron Wyden (D-OR) on the FISA Amendments Act last week.  Perhaps unsurprisingly, she’s rather more comfortable with the surveillance authorized by the law than our speakers were, and posted some critical commentary at the Lawfare blog (which is, incidentally, required reading for national security and intelligence buffs). Marcy Wheeler has already posted her own reply, but I’d like to hit a few points as well. Here’s Cordero:

Since at least the summer of 2011, [Wyden and Sen. Mark Udall] have been pushing the Intelligence Community to provide more public information about how the FAA works, and how it affects the privacy rights of Americans. In particular, they have, in a series of letters, requested that the Executive Branch provide an estimate of the number of Americans incidentally intercepted during the course of FAA surveillance. According to the exchanges of letters, the Executive Branch has repeatedly denied the request, on the basis that: i) it would be an unreasonable burden on the workforce (and, presumably, would take intelligence professionals off their national security mission); and ii) gathering the data the senators are requesting would, in and of itself, violate privacy rights of Americans.

The workforce argument, even if true, is, of course, a loser. The question of whether the data call itself would violate privacy rights is a more interesting one. Multiple oversight personnel independent of the operational and analytical wings of the Intelligence Community – including the Office of Management and Budget, the NSA Inspector General, and just last month, the Inspector General of the Intelligence Community, have all said that the data call requested by the senators is not feasible. The other members of the SSCI appear to accept this claim on its face. Meanwhile, Senator Wyden states he just finds the claim unbelievable. That there must be some way it can be done, he says, if even on a sample basis. Maintaining that position puts him in an interesting place, however: is the privacy advocate actually advocating for violating the privacy rules, to appease a Congressional request? Assuming that he would not actually want to advocate that the rules be waived at the request of a politician, a question then arises as to whether the Intelligence Community has adequately explained exactly how the data call would work and why it would conflict with existing privacy rules and protections, such as minimization procedures.

I’ll grant Cordero this point: as absurd as it sounds to say “we can’t tell you how many Americans we’re spying on, because it would violate their privacy,” this might well be a concern if those of us who follow these issues from the outside are correct in our surmises about what NSA is doing under FAA authority. The only real restriction the law places on the initial interception of communications is that the NSA use “targeting procedures” designed to capture traffic to or from overseas groups and individuals. There’s an enormous amount of circumstantial evidence to suggest that initial acquisition is therefore extremely broad, with a large percentage of international communications traffic being fed into NSA databases for later querying. If that’s the case, then naturally the tiny subset of communications later reviewed by a human analyst—because they match far narrower criteria for suspicion—is going to be highly unrepresentative. To get even a rough statistical sample of what’s in the larger database, then, one would have to “inspect”—possibly using software—a whole lot of the innocent communications that wouldn’t otherwise ever be analyzed. And possibly the rules currently in place don’t make any allowance for querying the database—even to analyze metadata for the purpose of generating aggregate statistics—unless it’s directly related to an intelligence purpose.
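To make the sampling problem concrete: if acquisition is broad and analyst review is narrow, any honest estimate requires drawing a random sample from the full database and inspecting records that would otherwise never be touched by a human or a query. Here is a minimal sketch of that arithmetic, using entirely hypothetical data and field names (nothing here reflects how NSA systems actually store records):

```python
import random

def estimate_us_share(records, sample_size, is_us, seed=0):
    """Estimate the fraction of records with a U.S. endpoint by
    inspecting only a random sample, not the whole database."""
    rng = random.Random(seed)
    sample = rng.sample(records, sample_size)
    hits = sum(1 for r in sample if is_us(r))
    p = hits / sample_size
    # Standard error of a sample proportion; a rough 95% interval is p ± 2*se.
    se = (p * (1 - p) / sample_size) ** 0.5
    return p, se

# Hypothetical corpus: 1 in 10 records involves a U.S. endpoint.
records = [{"us_endpoint": i % 10 == 0} for i in range(1_000_000)]
share, se = estimate_us_share(records, 10_000, lambda r: r["us_endpoint"])
print(f"estimated U.S. share: {share:.3f} ± {2 * se:.3f}")
```

Note where the privacy objection gets its force: the 10,000 sampled records are “inspected” solely to produce the statistic, which is exactly the kind of access the rules may not currently contemplate.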

A few points about this. First: assuming, for the moment, that this is the case, why can’t NSA and DOJ say so clearly and publicly? Because it would somehow imperil national security to characterize the surveillance program even at this highest level of generality, without any mention of particular search parameters or targets? Would it “help the terrorists” if they answered a more recent query from a bipartisan group of senators, asking whether database searches (as opposed to initial “targeting”) had focused on specific American citizens? Please.

A more plausible hypothesis is that they recognize that an official, public acknowledgement that the government is routinely copying and warehousing millions of completely innocent communications—even if they’re only looking at the “suspicious” minority—would not go over entirely smoothly with the citizenry. There might even be a demand for some public debate about whether this is the kind of thing we’re willing to countenance. Legal scholars might become curious whether whatever arguments support the constitutionality of this practice hold up as well in the light of day as they do when they’re made unopposed in closed chambers. Even without an actual estimate, any meaningful discussion of the workings of the program would be likely to undermine the whole pretense that it only “incidentally” involves the communications of innocent Americans, or that the constraints on “targeting” constitute a meaningful safeguard. The desire to avoid the whole hornet’s nest using the pretext of national security is perhaps understandable, but it shouldn’t be acceptable in a democracy. And everyone knows overclassification is endemic—even the government’s own former “classification czar” has blasted the government’s use of inappropriate secrecy as a weapon against critics.

Second, transparency at this level of generality is an essential component of privacy protection. To the extent that the rules governing access to the database preclude any attempt to audit its aggregate contents—including by automated software tallying of identifiers such as area codes and IP addresses—they should indeed be changed, not because a senator demanded it, but because they otherwise preclude adequate oversight. An online service that keeps no server logs would be somewhat more protective of its users’ privacy… if its database were otherwise perfectly secure against intrusion or misuse. In the real world, where there’s no such thing as perfect security, such a service would be protecting user privacy extremely poorly, because it would lack the ability to detect and prevent breaches. If it is not possible to audit the NSA’s system in this way, then that system needs to be altered until it is possible. If giving Congress a rough sense of the extent of the agency’s surveillance of Americans falls outside the parameters of the intelligence mission (and therefore the permissible uses of the database), it’s time for a new mission statement.

Finally, Cordero closes by noting the SSCI has touted its own oversight as “extensive” and “robust,” which Cordero thinks “debunks” the suggestion embedded in our event title that the FAA enables “mass spying without accountability.” (Can I debunk the debunking by lauding the accuracy and thoroughness of my own analysis?) Unfortunately, the consensus of most independent analysts of the intelligence committees’ performance is a good deal less sanguine—which makes me hesitant to take that self-assessment at face value.

As scholars frequently point out, the overseers are asked to process incredibly complex information with a limited cleared staff to assist them, and are often forbidden to take notes at briefings or remove reports from secure facilities. When you read about those extensive reports, recall that in the run-up to the invasion of Iraq only six senators and a handful of representatives ever read past the executive summary of the National Intelligence Estimate on Iraq’s WMD programs to the far more qualified language of the full 92-page report. You might think the intel committees would need to hold more hearings than their counterparts to compensate for these disadvantages, but UCLA’s Amy Zegart has found that they consistently rank at the bottom of the pack, year after year. Little wonder, then, that years of flagrant and systemic misuse of another controversial surveillance tool—National Security Letters—were uncovered not by the “extensive” and “robust” oversight of the intelligence committees, but by the Justice Department’s inspector general.

In any event, we seem to have at least 13 senators who don’t believe they’ve been provided with enough information to perform their oversight role adequately. Perhaps they’re setting the bar too high, but I find it more likely that their colleagues—who over time naturally grow to like and trust the intelligence officials upon whom they rely for their information—are a bit too easily satisfied. There are no prizes for expending time, energy, and political capital on ferreting out civil liberties problems in covert intelligence programs, least of all in an election year. It’s far easier to be satisfied with whatever data the intelligence community deigns to dribble out—often with heroic indifference to statutory reporting deadlines—and take it on faith that everything’s running as smoothly as they say. That allows you to write, and even believe, that you’re conducting “robust” oversight without knowing (as Wyden’s letter suggests the committee members do not) roughly how many Americans are being captured in NSA’s database, how many purely domestic communications have been intercepted, or whether warrantless “backdoor” targeting of Americans is being done via the selection of database queries. But the public need not be so easily satisfied, nor accept that meaningful “accountability” exists when all those extensive reports leave the overseers ignorant of so many basic facts.

Secret Cell Phone Tracking in the Sunshine State

The South Florida Sun-Sentinel provides us with one more data point showing the growing frequency with which police are using cell phones as tracking devices—a practice whose surprising prevalence the ACLU shone light on in April. In fiscal year 2011-2012, the first year Florida kept tabs on cell location tracking, state authorities made 171 location tracking requests—and apparently hope to expand the program.

The article alludes to a couple of specific cases in which location tracking was employed—to find a murder suspect and a girl who was thought to have been kidnapped—both of which are perfectly legitimate uses of the technology in principle. In general, if there’s enough evidence to issue an arrest warrant, the same evidence should support a warrant for tracking authority when the suspect’s location isn’t immediately known. In cases where police have a good faith belief that there’s a serious emergency—such as a suspected kidnapping—it’s even reasonable to allow police to seek location information without a court order, as is standard practice with most other kinds of electronic records requests. But the Sun-Sentinel report is also unsettlingly vague about the precise legal standard followed in non-emergency cases. According to a law enforcement official quoted in the story, the Florida Department of Law Enforcement’s electronic surveillance unit “always seeks judicial approval to trail someone with GPS,” while the written policy only “instructs agents to show probable cause for criminal activity to the department’s legal counsel to see if a court order is necessary,” implying that it sometimes is not.

The term “court order,” however, is quite broad: the word that’s conspicuously absent from these descriptions is “warrant”—an order meeting the Fourth Amendment’s standards. In the past, the Justice Department has argued that many kinds of location tracking may be conducted using other kinds of authority, such as so-called “pen register” and “2703(d)” orders. Unlike full-fledged search warrants, which require a showing of “probable cause” to believe the suspect has committed a crime, these lesser authorities require only “reasonable grounds” to believe the information sought would be “relevant” to some legitimate investigation. That is, needless to say, a far lower hurdle to clear.

Police refusal to discuss the program with reporters is also part of a larger pattern of secrecy surrounding location tracking. As Magistrate Judge Stephen Smith observes in a recent and important paper, such orders are often sealed indefinitely—which in practice means “forever.” Unlike the targets of ordinary wiretaps, who must eventually be informed about the surveillance after the fact, citizens who’ve been lojacked may never learn that the authorities were mapping their every move. Such secrecy may be useful to police—but it also means that improper use of an intrusive power is far less likely to ever come to light.

Location tracking can be a valuable tool for an array of legitimate law enforcement purposes—but especially in light of the Supreme Court’s unanimous decision in United States v. Jones, it has to be governed by clear, uniform standards that satisfy the demands of the Fourth Amendment.

NSA Spying and the Illusion of Oversight

Last week, the House Judiciary Committee hurtled toward reauthorization of a controversial spying law with a loud-and-clear declaration: not only do we have no idea how many American citizens are caught in the NSA’s warrantless surveillance dragnet, we don’t care—so please don’t tell us! By a 20–11 majority, the panel rejected an amendment that would have required the agency’s inspector general to produce an estimate of the number of Americans whose calls and e-mails were vacuumed up pursuant to broad “authorizations” under the FISA Amendments Act.

The agency’s Inspector General has apparently claimed that producing such an estimate would be “beyond the capacity of his office” and (wait for it) “would itself violate the privacy of U.S. persons.” This is hard to swallow on its face: there might plausibly be difficulties identifying the parties to intercepted e-mail communications, but at least for traditional phone calls, it should be trivial to tally up the number of distinct phone lines with U.S. area codes that have been subject to interception.
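For the phone-call case, the claimed impossibility is hard to credit precisely because the tally is a one-pass set operation. Purely as an illustration—with invented record fields standing in for whatever metadata the agency actually keeps, and noting that the +1 prefix technically covers Canada and the Caribbean as well as the U.S.—the core logic is a few lines:

```python
def count_distinct_us_lines(records):
    """Count distinct phone lines with a +1 (North American) country
    code appearing as either endpoint of an intercepted call."""
    us_lines = set()
    for rec in records:
        for number in (rec["caller"], rec["callee"]):
            if number.startswith("+1"):  # NANP country code
                us_lines.add(number)
    return len(us_lines)

# Hypothetical intercept records in E.164 format.
records = [
    {"caller": "+12025550123", "callee": "+92215550100"},
    {"caller": "+12025550123", "callee": "+92215550101"},   # same U.S. line
    {"caller": "+442075550199", "callee": "+92215550100"},  # no U.S. endpoint
]
print(count_distinct_us_lines(records))  # → 1
```

Whatever the real engineering obstacles, they are unlikely to live in arithmetic this simple; if the data makes even this infeasible, that says something about the data.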

If the claim is even partly accurate, however, this should in itself be quite troubling. In theory, the FAA is designed to permit algorithmic surveillance of overseas terror suspects—even when they communicate with Americans. (Traditionally, FISA left surveillance of wholly foreign communications unregulated, but required a warrant when at least one end of a wire communication was in the United States.) FAA surveillance programs must be designed to “prevent the intentional acquisition of any communication as to which the sender and all intended recipients are known at the time of the acquisition to be located in the United States”—a feature the law’s supporters tout to reassure us they haven’t opened the door to warrantless surveillance of purely domestic communications. “Persons” as defined under FISA covers groups and other corporate entities, so an interception algorithm could easily “target persons” abroad yet still sweep in purely domestic communications—a concern pointedly raised by the former head of the Justice Department’s National Security Division—and the “prevent the intentional acquisition” language is meant to foreclose exactly that. Attorney General Eric Holder has made it explicit that the point of the FAA is precisely to allow eavesdropping on broad “categories” of surveillance targets, defined by general search criteria, without having to identify individual targets. But the wording leaves a substantial loophole: if the NSA routinely sweeps up communications in bulk without any way of knowing where the endpoints are located, then it never has to worry about violating the “known at the time of the acquisition” clause. Indeed, we already know that “overcollection” of purely domestic communications occurred on a large scale, almost immediately after the law came into effect.

If we care about the spirit as well as the letter of that constraint being respected, it ought to be a little disturbing that the NSA has admitted it doesn’t have any systematic mechanism for identifying communications with U.S. endpoints. Similar considerations apply to the “minimization procedures” which are supposed to limit the retention and dissemination of information about U.S. persons: How meaningfully can these be applied if there’s no systematic effort to detect when a U.S. person is party to a communication? If this is done, even if only for the subset of communications reviewed by human analysts, why can’t that sample be used to generate a ballpark estimate for the broader pool of intercepted messages? How can the Senate report on the FAA extension seriously tout “extensive” oversight of the law’s implementation when it lacks even these elementary figures? If it is truly impossible to generate those figures, isn’t that a tacit admission that meaningful oversight of these incredible powers is also impossible?

Here’s a slightly cynical suggestion: Congress isn’t interested in demanding the data here because it might make it harder to maintain the pretense that the FAA is all about “foreign” surveillance, and therefore needn’t provoke any concern about domestic civil liberties. A cold hard figure confirming that large numbers of Americans are being spied on under the program would make such assurances harder to deliver with a straight face. The “overcollection” of domestic traffic by NSA reported in 2009 may have encompassed “millions” of communications, and still constituted only a small fraction of the total—which suggests that we could be dealing with a truly massive number.

In truth, the “foreign targeting” argument was profoundly misleading. FISA has never regulated surveillance of wholly foreign communications: if all you’re doing is listening in on calls between foreigners in Pakistan and Yemen, you don’t even need the broad authority provided by the FAA. FISA and the FAA only come into play when at least one party to the communication is a U.S. person—and perhaps for e-mails stored in the U.S. whose ultimate destination is unknown. Just as importantly, when you’re talking about large-scale, algorithm-based surveillance, it’s a mistake to put too much weight on “targeting” in the initial broad acquisition stage. If the first stage of your acquisition algorithm says “intercept all calls and e-mails between New York and Pakistan,” that will be kosher for FAA purposes provided the nominal target is the Pakistan side, but will entail spying on just as many Americans as foreigners in practice. If we knew just how many Americans, the FAA might not enjoy such a quick, quiet ride to reauthorization.

On Breach of Decorum and Government Growth

Last week, the Center for Democracy and Technology changed its position on CISPA, the Cyber Intelligence Sharing and Protection Act, twice in short succession, easing the way for House passage of a bill profoundly threatening to privacy.

Declan McCullagh of C|Net wrote a story about it called “Advocacy Group Flip-Flops Twice Over CISPA Surveillance Bill.” In it, he quoted me saying: “A lot of people in Washington, D.C. think that working with CDT means working for good values like privacy. But CDT’s number one goal is having a seat at the table. And CDT will negotiate away privacy toward that end.”

That comment netted some interesting reactions. Some were gleeful about this “emperor-has-no-clothes” moment for CDT. To others, I was inappropriately “insulting” to the good people at CDT. This makes the whole thing worthy of further exploration. How could I say something mean like that about an organization whose staff spend so much time working in good faith on improving privacy protections? Some folks there absolutely do. This does not overcome the institutional role CDT often plays, which I have not found so creditable. (More on that below. Far below…)

First, though, let me illustrate how CDT helped smooth the way for passage of the bill:

Congress is nothing if not ignorant about cybersecurity. It has no idea what to do about the myriad problems that exist in securing computers, networks, and data. So its leaders have fixed on “information sharing” as a panacea.

Because the nature and scope of the problems are unknown, the laws that stand in the way of relevant information sharing are unknown. The solution? Scythe down as much law as possible. (What’s actually needed, most likely, is a narrow amendment to ECPA. Nothing of the sort is yet in the offing.) But this creates a privacy problem: an “information sharing” bill could facilitate promiscuous sharing of personal information with government agencies, including the NSA.

On the House floor last week, the leading Republican sponsor of CISPA, Mike Rogers (R-MI), spoke endlessly about privacy and civil liberties, the negotiations, and the process he had undertaken to try to resolve problems in the privacy area. At the close of debate on the rule that would govern debate on the bill, he said:

The amendments that are following here are months of negotiation and work with many organizations—privacy groups. We have worked language with the Center for Democracy and Technology, and they just the other day said they applauded our progress on where we’re going with privacy and civil liberties. So we have included a lot of folks.

You see, just days before, CDT had issued a blog post saying that it would “not oppose the process moving forward in the House.” The full text of that sentence is actually quite precious because it shows how little CDT got in exchange for publicly withdrawing opposition to the bill. Along with citing “good progress,” CDT president and CEO Leslie Harris wrote:

Recognizing the importance of the cybersecurity issue, in deference to the good faith efforts made by Chairman Rogers and Ranking Member Ruppersberger, and on the understanding that amendments will be considered by the House to address our concerns, we will not oppose the process moving forward in the House.

Cybersecurity is an important issue—never mind whether the bill would actually help with it. The leadership of the House Intelligence Committee has acted in good faith. And amendments will evidently be forthcoming in the House. So go ahead and pass a bill not ready to become law, in light of “good progress.”

Then CDT got spun.

As McCullagh tells it:

The bill’s authors seized on CDT’s statement to argue that the anti-CISPA coalition was fragmenting, with an aide to House Intelligence Committee Chairman Mike Rogers (R-Mich.) sending reporters e-mail this morning, recalled a few minutes later, proclaiming: “CDT Drops Opposition to CISPA as Bill Moves to House Floor.” And the Information Technology Industry Council, which is unabashedly pro-CISPA, said it “applauds” the “agreement between CISPA sponsors and CDT.”

CDT quickly reversed itself, but the damage was done. Chairman Rogers could make an accurate but misleading floor statement omitting the fact that CDT had again reversed itself. This signaled to members of Congress and their staffs—who don’t pay close attention to subtle shifts in the views of organizations like CDT—that the privacy issues were under control. They could vote for CISPA without getting privacy blow-back. Despite furious efforts by groups like the Electronic Frontier Foundation and the ACLU, the bill passed 248 to 168.

Defenders of CDT will point out—accurately—that it argued laboriously for improvements to the bill. And with the bill’s passage inevitable, that was an essential benefit to the privacy side.

Well, yes and no. To get at that question, let’s talk about how groups represent the public’s interests in Washington, D.C. We’ll design a simplified representation game with the following cast of characters:

  • one powerful legislator, antagonistic to privacy, whose name is “S.R. Veillance”;
  • twenty privacy advocacy groups (Groups A through T); and
  • 20,000 people who rely on these advocacy groups to protect their privacy interests.

At the outset, the 20,000 people divide their privacy “chits”—that is, their donations and their willingness to act politically—equally among the groups. Based on their perceptions of the groups’ actions and relevance, the people re-assign their chits each legislative session.

Mr. Veillance has an anti-privacy bill he would like to get passed, but he knows it will meet resistance if he doesn’t get 2,500 privacy chits to signal that his bill isn’t that bad. If none of the groups give him any privacy chits, his legislation will not pass, so Mr. Veillance goes from group to group bargaining in good faith and signaling that he intends to do all he can to pass his bill. He will reward the groups that work with him by including such groups in future negotiations on future bills. He will penalize the groups that do not by excluding them from future negotiations.

What we have is a game somewhat like the prisoner’s dilemma in game theory. Though it is in the best interest of the society overall for the groups to cooperate and hold the line against a bill, individual groups can advantage themselves by “defecting” from the interests of all. These defectors will be at the table the next time an anti-privacy bill is negotiated.

Three groups—let’s say Group C, Group D, and Group T—defect from the pack. They make deals with Mr. Veillance to improve his bill, and in exchange they give him their privacy chits. He uses their 3,000 chits to signal to his colleagues that they can vote for the bill without fear of privacy-based repercussions.
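The threshold arithmetic of this toy game can be written down explicitly (all numbers come from the hypothetical scenario above, not from any real legislative fight):

```python
THRESHOLD = 2_500                    # chits Mr. Veillance needs to pass the bill
PEOPLE, GROUPS = 20_000, 20
chits_per_group = PEOPLE // GROUPS   # 1,000 chits per group at the outset

defectors = {"C", "D", "T"}          # groups that cut a deal
chits_for_bill = len(defectors) * chits_per_group

print(chits_for_bill)                # → 3000
print(chits_for_bill >= THRESHOLD)   # → True: the bill passes
```

Note that the bill clears the threshold even though seventeen of the twenty groups held the line—which is the structural point of the game.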

At the end of the first round, Mr. Veillance has passed his anti-privacy legislation (though weakened, from his perspective). Groups C, D, and T did improve the bill, making it less privacy-invasive than it otherwise would have been, and they have also positioned themselves to be more relevant to future privacy debates because they will have a seat at the table. Hindsight makes the passage of the bill look inevitable, and CDT looks all the wiser for working with Sir Veillance while others futilely opposed the bill.

Thus, having defected, CDT is now able to get more of people’s privacy chits during the next legislative session, so they have more bargaining power and money than other privacy groups. That bargaining power is relevant, though, only if Mr. Veillance moves more bills in the future. To maintain its bargaining power and income, it is in the interest of CDT to see that legislation passes regularly. If anti-privacy legislation never passes, CDT’s unique role as a negotiator will not be valued and its ability to gather chits will diminish over time.

CDT plays a role in “improving” individual pieces of legislation to make them less privacy-invasive and it helps to ensure that improved—yet still privacy-invasive—legislation passes. Over the long run, to keep its seat at the table, CDT bargains away privacy.

This highly simplified game repeats itself across many issue dimensions in every bill, and it involves many more, highly varied actors using widely differing influence “chits.” The power exchanges and signaling among parties end up looking like a kaleidoscope rather than the linear story of an organization subtly putting its own goals ahead of the public interest.

Most people working in Washington, D.C., and almost assuredly everyone at CDT, have no awareness that they live under the collective action problem illustrated by this game. This is why government grows and privacy recedes.
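The collective action dynamic described above can be made concrete with a toy simulation. The sketch below is purely illustrative: only the 20,000 chits and the 2,500-chit threshold come from the story; the group count, the reward rate, and the number of sessions are my own assumptions.

```python
# Toy model of the "privacy chit" game. Illustrative numbers only:
# the 20,000 chits and 2,500-chit threshold come from the story above;
# everything else (8 groups, 10% reward rate, 5 sessions) is assumed.

TOTAL_CHITS = 20_000
THRESHOLD = 2_500   # chits Mr. Veillance needs to pass his bill
N_GROUPS = 8

def play_session(shares, defectors, reward=0.10):
    """One legislative session.

    shares: dict mapping group -> fraction of the public's chits.
    defectors: groups that trade their chits for a seat at the table.
    reward: fraction of total chit-share the public shifts toward
            groups seen as 'relevant' after a bill passes.
    Returns (bill_passed, new_shares).
    """
    offered = sum(shares[g] * TOTAL_CHITS for g in defectors)
    passed = offered >= THRESHOLD
    if not passed:
        return False, dict(shares)
    # The public rewards the visible negotiators with a larger share
    # of next session's chits, drawn evenly from the loyal groups.
    new = dict(shares)
    bonus = reward / len(defectors)
    for g in defectors:
        new[g] += bonus
    loyal = [g for g in shares if g not in defectors]
    drain = reward / len(loyal)
    for g in loyal:
        new[g] -= drain
    return True, new

groups = [f"G{i}" for i in range(N_GROUPS)]
shares = {g: 1 / N_GROUPS for g in groups}  # equal split at the outset

# Three defectors (the 'C, D, T' of the story) across five sessions.
for _ in range(5):
    passed, shares = play_session(shares, defectors={"G0", "G1", "G2"})

print(round(shares["G0"], 3), round(shares["G7"], 3))
```

Under these assumed numbers, each defector’s share climbs from 12.5% toward roughly 29% after five sessions while each loyal group’s share falls to about 2.5%, which is the point of the story: every session in which a bill passes enriches the negotiators at the expense of the holdouts, so the coalition erodes.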

In his article, McCullagh cites CDT founder Jerry Berman’s role in the 1994 passage of CALEA, the Communications Assistance for Law Enforcement Act. I took particular interest in CDT’s 2009 backing of the REAL ID revival bill, PASS ID. In 2006, CDT’s Jim Dempsey helped give privacy cover to the use of RFID in identification documents, contrary to the principle that RFID is for products, not people. A comprehensive study of CDT’s institutional behavior to confirm or deny my theory would be very complex and time-consuming.

But divide and conquer works well. My experience is that CDT is routinely the first defector from the privacy coalition despite the earnest good intentions of many individual CDTers. And it’s why I say, perhaps in breach of decorum, things like: “A lot of people in Washington, D.C. think that working with CDT means working for good values like privacy. But CDT’s number one goal is having a seat at the table. And CDT will negotiate away privacy toward that end.”

The Lives of Others 2.0

Tattoo it on your forearm—or better, that of your favorite legislator—for easy reference in the next debate over wiretapping: government surveillance is a security breach—by definition and by design. The latest evidence of this comes from Germany, where there’s growing furor over a hacker group’s allegations that government-designed Trojan Horse spyware is not only insecure, but packed with functions that exceed the limits of German law:

On Saturday, the CCC (the hacker group) announced that it had been given hard drives containing “state spying software,” which had allegedly been used by German investigators to carry out surveillance of Internet communication. The organization had analyzed the software and found it to be full of defects. They also found that it transmitted information via a server located in the United States. As well as its surveillance functions, it could be used to plant files on an individual’s computer. It was also not sufficiently protected, so that third parties with the necessary technical skills could hijack the Trojan horse’s functions for their own ends. The software possibly violated German law, the organization said.

Back in 2004–2005, software designed to facilitate police wiretaps was exploited by unknown parties to intercept the communications of dozens of top political officials in Greece. And just last year, we saw an attack on Google’s e-mail system targeting Chinese dissidents, which some sources have claimed was carried out by compromising a backend interface designed for law enforcement.

Any communications architecture that is designed to facilitate outsider access to communications—for all the most noble reasons—is necessarily more vulnerable to malicious interception as a result. That’s why technologists have looked with justified skepticism on periodic calls from intelligence agencies to redesign data networks for their convenience. At least in this case, the vulnerability is limited to specific target computers on which the malware has been installed. Increasingly, governments want their spyware installed at the switches—making for a more attractive target, and more catastrophic harm in the event of a successful attack.

Stalking the Secret Patriot Act

Since this spring’s blink-and-you-missed-it debate over reauthorization of several controversial provisions of the Patriot Act, Senators Ron Wyden (D-OR) and Mark Udall (D-CO) have been complaining to anyone who’d listen about a “Secret Patriot Act”—an interpretation of one of the law’s provisions by the classified Foreign Intelligence Surveillance Court granting surveillance powers exceeding those an ordinary person would understand to be conferred by the text of the statute itself. As I argued at the time, there is an enormous amount of strong circumstantial evidence suggesting that this referred to a “sensitive collection program” involving cell phone location tracking—potentially on a mass scale—using Patriot’s “Section 215” or “business records” authority.

Lest anyone think they’d let the issue drop, Wyden and Udall last week released a sharply-worded letter to Attorney General Eric Holder, blasting the Justice Department for misleading the public about the scope of the government’s surveillance authority. The real audience for an open letter of this sort, of course, is not the nominal recipient, but rather the press and the public. Beyond simply reminding us that the issue exists, the letter confirms for the first time that the “secret law” of which the senators had complained does indeed involve Section 215. But there are some additional intriguing morsels for the attentive surveillance wonk.

The letter focuses particularly on “highly misleading” statements by Justice Department officials analogizing Section 215 powers to grand jury subpoenas. “As you know,” Wyden and Udall write, “Section 215 authorities are not interpreted in the same way that grand jury subpoena authorities are, and we are concerned that when Justice Department officials suggest that the two authorities are ‘analogous’ they provide the public with a false understanding of how surveillance law is interpreted in practice.”

Now, this is a little curious on its face. Ever since the original debate over the passage of the Patriot Act, its defenders have tried to claim that a variety of provisions allowing the FBI to more easily obtain sensitive records and documents were no big deal, because grand juries have long enjoyed similarly broad subpoena powers. The comparison has been specious all along: the grand jury is an arm of the judicial branch designed (at least in theory) to serve as a buffer between the power of prosecutors and the citizenry. It exists for the specific purpose of determining whether grounds for a criminal indictment exist, and is granted those broad subpoena powers precisely on the premise that it is not just another executive branch investigative agency. To argue, then, that it would make no difference if the FBI or the police could secretly exercise the same type of authority is to miss the point of how our system of government is meant to work in a pretty stunning way. It’s akin to suggesting that, since juries can sentence people to life in prison, it would be no big deal to give the president or the director of the FBI the same power.

That’s not what Wyden and Udall are stressing here, however. Rather, they seem to be suggesting that the scope of the 215 authority itself has been secretly interpreted in a way that goes beyond the scope of the grand jury subpoena power. Now that ought to be striking, because the grand jury’s power to compel the production of documents really is quite broad. Yet, what Wyden and Udall appear to be suggesting is that there is some kind of limit or restriction that does apply to grand jury subpoenas, but has been held by the secret court not to apply to Section 215 orders. One possibility is that the FISC may have seen fit to issue prospective 215 orders, imposing an ongoing obligation on telecommunications companies or other recipients to keep producing records related to a target as they’re created, rather than being limited to records and documents already in existence. But given the quantity of evidence that already suggests the “Secret Patriot Act” involves location tracking, I find it suggestive that the very short list of specific substantive limits on grand jury subpoena power in the U.S. Attorneys’ Manual includes this:

It is improper to utilize the grand jury solely as an investigative aid in the search for a fugitive in whose testimony the grand jury has no interest. In re Pedro Archuleta, 432 F. Supp. 583 (S.D.N.Y. 1977); In re Wood, 430 F. Supp. 41 (S.D.N.Y. 1977), aff’d sub nom In re Cueto, 554 F.2d 14 (2d Cir. 1977). … Since indictments for unlawful flight are rarely sought, it would be improper to routinely use the grand jury in an effort to locate unlawful flight fugitives.

As the manual makes clear, the constraints on the power of the grand jury generally are determined by its purpose and function, but locating subjects for the benefit of law enforcement (rather than as a means of securing their testimony before the grand jury) is one of the few things so expressly and specifically excluded. Could this be what Wyden and Udall are obliquely referring to?

On a possibly related note, the Director of National Intelligence’s office sent Wyden and Udall a letter back in July rebuffing their request for information about the legal standard governing geolocation tracking by the intelligence community. While refusing to get into specifics, the letter explains that “there have been a diverse set of rulings concerning the quantum of evidence and the procedures required to obtain such evidence.” Now, a bit of common sense here: it is inconceivable that any judge on the secret court would not permit cell phone geolocation tracking of a target who was the subject of a full-blown FISA electronic surveillance warrant based on probable cause. There would be no “diversity” if the intelligence agencies were uniformly using only that procedure and that “quantum of evidence.” This claim only makes sense if the agencies have sought and, under some circumstances, obtained authorization to track cell phones pursuant to some other legal process requiring a lower evidentiary showing. (Again, you would not have “diversity” if the court had consistently responded to all such requests with: “No, get a warrant.”)

The options here are pretty limited, because the Foreign Intelligence Surveillance Act only provides for a few different kinds of orders to be issued by the FISC. There’s a full electronic surveillance warrant, requiring a probable cause showing that the target is an “agent of a foreign power.” There’s a warrant for physical search, with the same standard, which doesn’t seem likely to be relevant to geotracking. The only other real options are so-called “pen register” orders, which are used to obtain realtime communications metadata, and Section 215. Both require only that the information sought be “relevant” to an ongoing national security investigation. For pen registers, the applicant need only “certify” that this is the case, which leaves judges with little to do beyond rubber-stamping orders. Section 215 orders require a “statement of facts showing that there are reasonable grounds” to think the information sought is “relevant,” but the statute also provides that any records are automatically relevant if they pertain to a suspected “agent of a foreign power,” or to anyone “in contact with, or known to” such an agent, or to the “activities of a suspected agent of a foreign power who is the subject of [an] authorized investigation.” The only way there can logically be “a diverse set of rulings” about the “quantum of evidence and the procedures required” to conduct cell phone location tracking is if the secret court has, on at least some occasions, allowed it under one or both of those authorities. Perhaps ironically, then, this terse response is not far short of a confirmation.

In criminal investigations, as I noted in a previous post, the Justice Department normally seeks a full warrant in order to do highly accurate, 24-hour realtime location tracking, though it is not clear they believe this is constitutionally required. With a court order for the production of records based on “specific and articulable facts,” they can get call records generally indicating the location of the nearest cell tower when a call was placed—a much less precise and intrusive form of tracking, but one that is increasingly revealing as providers store more data and install ever more cell towers. For realtime tracking that is less precise, they’ll often seek to bundle a records order with a pen register order, to create a “hybrid” tracking order. Judges are increasingly concluding that these standards do not adequately protect constitutional privacy interests, but you’d expect a “diverse set of rulings” if the FISC had adopted a roughly parallel set of rules—except, of course, that the standards for the equivalent orders on the intelligence side are a good deal more permissive. The bottom line, though, is that this makes it all but certain the intelligence agencies are secretly tracking people—and potentially large numbers of people—whom they do not have probable cause to believe, and may not even suspect, are involved in terrorism or espionage. No wonder Wyden and Udall are concerned.