
NSA Spying and the Illusion of Oversight

Last week, the House Judiciary Committee hurtled toward reauthorization of a controversial spying law with a loud-and-clear declaration: not only do we have no idea how many American citizens are caught in the NSA’s warrantless surveillance dragnet, we don’t care—so please don’t tell us! By a 20–11 majority, the panel rejected an amendment that would have required the agency’s inspector general to produce an estimate of the number of Americans whose calls and e-mails were vacuumed up pursuant to broad “authorizations” under the FISA Amendments Act.

The agency’s Inspector General has apparently claimed that producing such an estimate would be “beyond the capacity of his office” and (wait for it) “would itself violate the privacy of U.S. persons.” This is hard to swallow on its face: there might plausibly be difficulties identifying the parties to intercepted e-mail communications, but at least for traditional phone calls, it should be trivial to tally up the number of distinct phone lines with U.S. area codes that have been subject to interception.
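To make concrete just how simple such a tally could be in the phone context, here is a minimal Python sketch. The intercept log, its field names, and the numbers in it are all hypothetical; nothing here describes an actual NSA system, and a real count would need a table of U.S. area codes, since the +1 country code also covers Canada and the Caribbean.

    # Hypothetical sketch: count distinct U.S. phone lines appearing anywhere
    # in a set of intercept records. Data layout and field names are invented.
    def is_nanp_number(number: str) -> bool:
        """Crude check for a North American Numbering Plan (+1) line."""
        digits = "".join(ch for ch in number if ch.isdigit())
        return len(digits) == 11 and digits.startswith("1")

    def count_us_lines(intercept_records) -> int:
        """intercept_records: iterable of dicts with 'caller' and 'callee' keys."""
        lines = set()
        for record in intercept_records:
            for endpoint in (record["caller"], record["callee"]):
                if is_nanp_number(endpoint):   # a real tally would also check
                    lines.add(endpoint)        # the area code against a U.S. list
        return len(lines)

    sample = [
        {"caller": "+1 202 555 0123", "callee": "+92 21 555 0101"},  # D.C. to Karachi
        {"caller": "+92 21 555 0101", "callee": "+1 202 555 0123"},  # same U.S. line
        {"caller": "+44 20 7555 0199", "callee": "+967 1 555 0142"}, # no U.S. endpoint
    ]
    print(count_us_lines(sample))  # prints 1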

If the claim is even partly accurate, however, this should in itself be quite troubling. In theory, the FAA is designed to permit algorithmic surveillance of overseas terror suspects—even when they communicate with Americans. (Traditionally, FISA left surveillance of wholly foreign communications unregulated, but required a warrant when at least one end of a wire communication was in the United States.) But FAA surveillance programs must be designed to “prevent the intentional acquisition of any communication as to which the sender and all intended recipients are known at the time of the acquisition to be located in the United States”—a feature the law’s supporters tout to reassure us they haven’t opened the door to warrantless surveillance of purely domestic communications. The wording leaves a substantial loophole, though. “Persons” as defined under FISA covers groups and other corporate entities, so an interception algorithm could easily “target persons” abroad but still flag purely domestic communications—a concern pointedly raised by the former head of the Justice Department’s National Security Division. The “prevent the intentional acquisition” language is supposed to guard against that. Attorney General Eric Holder has made it explicit that the point of the FAA is precisely to allow eavesdropping on broad “categories” of surveillance targets, defined by general search criteria, without having to identify individual targets. But, of course, if the NSA routinely sweeps up communications in bulk without any way of knowing where the endpoints are located, then it never has to worry about violating the “known at the time of acquisition” clause. Indeed, we already know that “overcollection” of purely domestic communications occurred on a large scale, almost immediately after the law came into effect.

If we care about the spirit as well as the letter of that constraint being respected, it ought to be a little disturbing that the NSA has admitted it doesn’t have any systematic mechanism for identifying communications with U.S. endpoints. Similar considerations apply to the “minimization procedures” which are supposed to limit the retention and dissemination of information about U.S. persons: How meaningfully can these be applied if there’s no systematic effort to detect when a U.S. person is party to a communication? If this is done, even if only for the subset of communications reviewed by human analysts, why can’t that sample be used to generate a ballpark estimate for the broader pool of intercepted messages? How can the Senate report on the FAA extension seriously tout “extensive” oversight of the law’s implementation when it lacks even these elementary figures? If it is truly impossible to generate those figures, isn’t that a tacit admission that meaningful oversight of these incredible powers is also impossible?
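The extrapolation itself would be elementary. Here is a minimal Python sketch of the ballpark estimate described above, using invented numbers purely for illustration; the real wrinkle, which the sketch ignores, is that the communications analysts happen to review are not a random sample of everything intercepted.

    import math

    # Invented figures for illustration only: analysts reviewed 10,000 intercepts,
    # flagged 1,200 as involving a U.S. person, out of 5,000,000 total intercepts.
    reviewed, flagged, total = 10_000, 1_200, 5_000_000

    p = flagged / reviewed                     # observed share of U.S.-person intercepts
    estimate = p * total                       # ballpark count for the whole pool
    # Rough 95% interval, valid only if the reviewed subset were a random sample.
    margin = 1.96 * math.sqrt(p * (1 - p) / reviewed) * total

    print(f"{estimate:,.0f} ± {margin:,.0f}")  # about 600,000 ± 32,000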

Here’s a slightly cynical suggestion: Congress isn’t interested in demanding the data here because it might make it harder to maintain the pretense that the FAA is all about “foreign” surveillance, and therefore needn’t provoke any concern about domestic civil liberties. A cold hard figure confirming that large numbers of Americans are being spied on under the program would make such assurances harder to deliver with a straight face. The “overcollection” of domestic traffic by NSA reported in 2009 may have encompassed “millions” of communications, and still constituted only a small fraction of the total—which suggests that we could be dealing with a truly massive number.

In truth, the “foreign targeting” argument was profoundly misleading. FISA has never regulated surveillance of wholly foreign communications: if all you’re doing is listening in on calls between foreigners in Pakistan and Yemen, you don’t even need the broad authority provided by the FAA. FISA and the FAA only need to come into play when one end of the communication is in the United States—and perhaps for e-mails stored in the U.S. whose ultimate destination is unknown. Just as importantly, when you’re talking about large-scale, algorithm-based surveillance, it’s a mistake to put too much weight on “targeting” in the initial broad acquisition stage. If the first stage of your acquisition algorithm says “intercept all calls and e-mails between New York and Pakistan,” that will be kosher for FAA purposes provided the nominal target is the Pakistan side, but it will entail spying on just as many Americans as foreigners in practice. If we knew just how many Americans, the FAA might not enjoy such a quick, quiet ride to reauthorization.

On Breach of Decorum and Government Growth

Last week, the Center for Democracy and Technology changed its position on CISPA, the Cyber Intelligence Sharing and Protection Act, twice in short succession, easing the way for House passage of a bill profoundly threatening to privacy.

Declan McCullagh of C|Net wrote a story about it called “Advocacy Group Flip-Flops Twice Over CISPA Surveillance Bill.” In it, he quoted me saying: “A lot of people in Washington, D.C. think that working with CDT means working for good values like privacy. But CDT’s number one goal is having a seat at the table. And CDT will negotiate away privacy toward that end.”

That comment netted some interesting reactions. Some were gleeful about this “emperor-has-no-clothes” moment for CDT. To others, I was inappropriately “insulting” to the good people at CDT. This makes the whole thing worthy of further exploration. How could I say something mean like that about an organization whose staff spend so much time working in good faith on improving privacy protections? Some folks there absolutely do. This does not overcome the institutional role CDT often plays, which I have not found so creditable. (More on that below. Far below…)

First, though, let me illustrate how CDT helped smooth the way for passage of the bill:

Congress is nothing if not ignorant about cybersecurity. It has no idea what to do about the myriad problems that exist in securing computers, networks, and data. So its leaders have fixed on “information sharing” as a panacea.

Because the nature and scope of the problems are unknown, the laws that stand in the way of relevant information sharing are unknown. The solution? Scythe down as much law as possible. (What’s actually needed, most likely, is a narrow amendment to ECPA. Nothing of the sort is yet in the offing.) But this creates a privacy problem: an “information sharing” bill could facilitate promiscuous sharing of personal information with government agencies, including the NSA.

On the House floor last week, the leading Republican sponsor of CISPA, Mike Rogers (R-MI), spoke endlessly about privacy and civil liberties, the negotiations, and the process he had undertaken to try to resolve problems in the privacy area. At the close of debate on the rule that would govern debate on the bill, he said:

The amendments that are following here are months of negotiation and work with many organizations—privacy groups. We have worked language with the Center for Democracy and Technology, and they just the other day said they applauded our progress on where we’re going with privacy and civil liberties. So we have included a lot of folks.

You see, just days before, CDT had issued a blog post saying that it would “not oppose the process moving forward in the House.” The full text of that sentence is actually quite precious because it shows how little CDT got in exchange for publicly withdrawing opposition to the bill. Along with citing “good progress,” CDT president and CEO Leslie Harris wrote:

Recognizing the importance of the cybersecurity issue, in deference to the good faith efforts made by Chairman Rogers and Ranking Member Ruppersberger, and on the understanding that amendments will be considered by the House to address our concerns, we will not oppose the process moving forward in the House.

Cybersecurity is an important issue—never mind whether the bill would actually help with it. The leadership of the House Intelligence Committee has acted in good faith. And amendments will evidently be forthcoming in the House. So go ahead and pass a bill not ready to become law, in light of “good progress.”

Then CDT got spun.

As McCullagh tells it:

The bill’s authors seized on CDT’s statement to argue that the anti-CISPA coalition was fragmenting, with an aide to House Intelligence Committee Chairman Mike Rogers (R-Mich.) sending reporters e-mail this morning, recalled a few minutes later, proclaiming: “CDT Drops Opposition to CISPA as Bill Moves to House Floor.” And the Information Technology Industry Council, which is unabashedly pro-CISPA, said it “applauds” the “agreement between CISPA sponsors and CDT.”

CDT quickly reversed itself, but the damage was done. Chairman Rogers could make an accurate but misleading floor statement omitting the fact that CDT had again reversed itself. This signaled to members of Congress and their staffs—who don’t pay close attention to subtle shifts in the views of organizations like CDT—that the privacy issues were under control. They could vote for CISPA without getting privacy blow-back. Despite furious efforts by groups like the Electronic Frontier Foundation and the ACLU, the bill passed 248 to 168.

Defenders of CDT will point out—accurately—that it argued laboriously for improvements to the bill. And with the bill’s passage inevitable, that was an essential benefit to the privacy side.

Well, yes and no. To get at that question, let’s talk about how groups represent the public’s interests in Washington, D.C. We’ll design a simplified representation game with the following cast of characters:

  • one powerful legislator, antagonistic to privacy, whose name is “S.R. Veillance”;
  • twenty privacy advocacy groups (Groups A through T); and
  • 20,000 people who rely on these advocacy groups to protect their privacy interests.

At the outset, the 20,000 people divide their privacy “chits”—that is, their donations and their willingness to act politically—equally among the groups. Based on their perceptions of the groups’ actions and relevance, the people re-assign their chits each legislative session.

Mr. Veillance has an anti-privacy bill he would like to get passed, but he knows it will meet resistance if he doesn’t get 2,500 privacy chits to signal that his bill isn’t that bad. If none of the groups give him any privacy chits, his legislation will not pass, so Mr. Veillance goes from group to group bargaining in good faith and signaling that he intends to do all he can to pass his bill. He will reward the groups that work with him by including such groups in future negotiations on future bills. He will penalize the groups that do not by excluding them from future negotiations.

What we have is a game somewhat like the prisoner’s dilemma in game theory. Though it is in the best interest of the society overall for the groups to cooperate and hold the line against a bill, individual groups can advantage themselves by “defecting” from the interests of all. These defectors will be at the table the next time an anti-privacy bill is negotiated.

Three groups—let’s say Group C, Group D, and Group T—defect from the pack. They make deals with Mr. Veillance to improve his bill, and in exchange they give him their privacy chits. He uses their 3,000 chits to signal to his colleagues that they can vote for the bill without fear of privacy-based repercussions.

At the end of the first round, Mr. Veillance has passed his anti-privacy legislation (though weakened, from his perspective). Groups C, D, and T did improve the bill, making it less privacy-invasive than it otherwise would have been, and they have also positioned themselves to be more relevant to future privacy debates because they will have a seat at the table. Hindsight makes the passage of the bill look inevitable, and CDT looks all the wiser for working with Mr. Veillance while others futilely opposed the bill.

Thus, having defected, CDT is now able to get more of people’s privacy chits during the next legislative session, so it has more bargaining power and money than other privacy groups. That bargaining power is relevant, though, only if Mr. Veillance moves more bills in the future. To maintain its bargaining power and income, it is in the interest of CDT to see that legislation passes regularly. If anti-privacy legislation never passes, CDT’s unique role as a negotiator will not be valued and its ability to gather chits will diminish over time.
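The dynamic is easy enough to simulate. The Python sketch below is a toy version of the game above; the parameters come from the setup, but the re-assignment rule (each session, people shift a tenth of every group’s chits toward the groups that were “at the table”) is invented purely to illustrate how defection compounds.

    # Toy simulation of the representation game. The 10% chit-shift rule is an
    # invented illustration, not a claim about how real donors or members behave.
    GROUPS, PEOPLE, THRESHOLD = 20, 20_000, 2_500
    chits = [PEOPLE / GROUPS] * GROUPS      # 1,000 chits per group at the outset
    defectors = {2, 3, 19}                  # Groups C, D, and T

    for session in range(5):
        offered = sum(chits[g] for g in defectors)
        bill_passes = offered >= THRESHOLD  # Mr. Veillance needs 2,500 chits
        if bill_passes:
            # Defectors look relevant, so a slice of everyone's chits drifts to them.
            shifted = [0.10 * c for c in chits]
            chits = [c - s for c, s in zip(chits, shifted)]
            for g in defectors:
                chits[g] += sum(shifted) / len(defectors)
        print(session, bill_passes, round(max(chits)), round(min(chits)))

Run it and the defectors’ chit totals climb every session while everyone else’s shrink, but only so long as bills keep passing, which is exactly the incentive just described.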

CDT plays a role in “improving” individual pieces of legislation to make them less privacy-invasive and it helps to ensure that improved—yet still privacy-invasive—legislation passes. Over the long run, to keep its seat at the table, CDT bargains away privacy.

This highly simplified representation game repeats itself across many issue-dimensions in every bill, and it involves many more, highly varied actors using widely differing influence “chits.” The power exchanges and signaling among parties end up looking like a kaleidoscope rather than the linear story of an organization subtly putting its own goals ahead of the public interest.

Most people working in Washington, D.C., and almost assuredly everyone at CDT, have no awareness that they live under the collective action problem illustrated by this game. This is why government grows and privacy recedes.

In his article, McCullagh cites CDT founder Jerry Berman’s role in the 1994 passage of CALEA, the Communications Assistance for Law Enforcement Act. I took particular interest in CDT’s 2009 backing of the REAL ID revival bill, PASS ID. In 2006, CDT’s Jim Dempsey helped give privacy cover to the use of RFID in identification documents, contrary to the principle that RFID is for products, not people. A comprehensive study of CDT’s institutional behavior to confirm or deny my theory would be very complex and time-consuming.

But divide and conquer works well. My experience is that CDT is routinely the first defector from the privacy coalition despite the earnest good intentions of many individual CDTers. And it’s why I say, perhaps in breach of decorum, things like: “A lot of people in Washington, D.C. think that working with CDT means working for good values like privacy. But CDT’s number one goal is having a seat at the table. And CDT will negotiate away privacy toward that end.”

The Lives of Others 2.0

Tattoo it on your forearm—or better, that of your favorite legislator—for easy reference in the next debate over wiretapping: government surveillance is a security breach—by definition and by design. The latest evidence of this comes from Germany, where there’s growing furor over a hacker group’s allegations that government-designed Trojan Horse spyware is not only insecure, but packed with functions that exceed the limits of German law:

On Saturday, the CCC (the hacker group) announced that it had been given hard drives containing “state spying software,” which had allegedly been used by German investigators to carry out surveillance of Internet communication. The organization had analyzed the software and found it to be full of defects. They also found that it transmitted information via a server located in the United States. As well as its surveillance functions, it could be used to plant files on an individual’s computer. It was also not sufficiently protected, so that third parties with the necessary technical skills could hijack the Trojan horse’s functions for their own ends. The software possibly violated German law, the organization said.

Back in 2004–2005, software designed to facilitate police wiretaps was exploited by unknown parties to intercept the communications of dozens of top political officials in Greece. And just last year, we saw an attack on Google’s e-mail system targeting Chinese dissidents, which some sources have claimed was carried out by compromising a backend interface designed for law enforcement.

Any communications architecture that is designed to facilitate outsider access to communications—however noble the reasons—is necessarily more vulnerable to malicious interception as a result. That’s why technologists have looked with justified skepticism on periodic calls from intelligence agencies to redesign data networks for their convenience. At least in this case, the vulnerability is limited to specific target computers on which the malware has been installed. Increasingly, though, governments want their spyware installed at the switches—making for a more attractive target, and more catastrophic harm in the event of a successful attack.

Stalking the Secret Patriot Act

Since this spring’s blink-and-you-missed-it debate over reauthorization of several controversial provisions of the Patriot Act, Senators Ron Wyden (D-OR) and Mark Udall (D-CO) have been complaining to anyone who’d listen about a “Secret Patriot Act”—an interpretation of one of the law’s provisions by the secret Foreign Intelligence Surveillance Court granting surveillance powers exceeding those an ordinary person would understand to be conferred by the text of the statute itself. As I argued at the time, there is an enormous amount of strong circumstantial evidence suggesting that this referred to a “sensitive collection program” involving cell phone location tracking—potentially on a mass scale—using Patriot’s “Section 215” or “business records” authority.

Lest anyone think they’d let the issue drop, Wyden and Udall last week released a sharply-worded letter to Attorney General Eric Holder, blasting the Justice Department for misleading the public about the scope of the government’s surveillance authority. The real audience for an open letter of this sort, of course, is not the nominal recipient, but rather the press and the public. Beyond simply reminding us that the issue exists, the letter confirms for the first time that the “secret law” of which the senators had complained does indeed involve Section 215. But there are some additional intriguing morsels for the attentive surveillance wonk.

The letter focuses particularly on “highly misleading” statements by Justice Department officials analogizing Section 215 powers to grand jury subpoenas. “As you know,” Wyden and Udall write, “Section 215 authorities are not interpreted in the same way that grand jury subpoena authorities are, and we are concerned that when Justice Department officials suggest that the two authorities are ‘analogous’ they provide the public with a false understanding of how surveillance law is interpreted in practice.”

Now, this is a little curious on its face. Ever since the original debate over the passage of the Patriot Act, its defenders have tried to claim that a variety of provisions allowing the FBI to more easily obtain sensitive records and documents were no big deal, because grand juries have long enjoyed similarly broad subpoena powers. The comparison has been specious all along: the grand jury is an arm of the judicial branch designed (at least in theory) to serve as a buffer between the power of prosecutors and the citizenry. It exists for the specific purpose of determining whether grounds for a criminal indictment exist, and is granted those broad subpoena powers precisely on the premise that it is not just another executive branch investigative agency. To argue, then, that it would make no difference if the FBI or the police could secretly exercise the same type of authority is to miss the point of how our system of government is meant to work in a pretty stunning way. It’s akin to suggesting that, since juries can sentence people to life in prison, it would be no big deal to give the president or the director of the FBI the same power.

That’s not what Wyden and Udall are stressing here, however. Rather, they seem to be suggesting that the scope of the 215 authority itself has been secretly interpreted in a way that goes beyond the scope of the grand jury subpoena power. Now that ought to be striking, because the grand jury’s power to compel the production of documents really is quite broad. Yet, what Wyden and Udall appear to be suggesting is that there is some kind of limit or restriction that does apply to grand jury subpoenas, but has been held by the secret court not to apply to Section 215 orders. One possibility is that the FISC may have seen fit to issue prospective 215 orders, imposing an ongoing obligation on telecommunications companies or other recipients to keep producing records related to a target as they’re created, rather than being limited to records and documents already in existence. But given the quantity of evidence that already suggests the “Secret Patriot Act” involves location tracking, I find it suggestive that the very short list of specific substantive limits on grand jury subpoena power in the U.S. Attorneys’ Manual includes this:

It is improper to utilize the grand jury solely as an investigative aid in the search for a fugitive in whose testimony the grand jury has no interest. In re Pedro Archuleta, 432 F. Supp. 583 (S.D.N.Y. 1977); In re Wood, 430 F. Supp. 41 (S.D.N.Y. 1977), aff’d sub nom In re Cueto, 554 F.2d 14 (2d Cir. 1977). … Since indictments for unlawful flight are rarely sought, it would be improper to routinely use the grand jury in an effort to locate unlawful flight fugitives.

As the manual makes clear, the constraints on the power of the grand jury generally are determined by its purpose and function, but locating subjects for the benefit of law enforcement (rather than as a means of securing their testimony before the grand jury) is one of the few things so expressly and specifically excluded. Could this be what Wyden and Udall are obliquely referring to?

On a possibly related note, the Director of National Intelligence’s office sent Wyden and Udall a letter back in July rebuffing their request for information about the legal standard governing geolocation tracking by the intelligence community. While refusing to get into specifics, the letter explains that “there have been a diverse set of rulings concerning the quantum of evidence and the procedures required to obtain such evidence.” Now, a bit of common sense here: it is inconceivable that any judge on the secret court would not permit cell phone geolocation tracking of a target who was the subject of a full-blown FISA electronic surveillance warrant based on probable cause. There would be no “diversity” if the intelligence agencies were uniformly using only that procedure and that “quantum of evidence.” This claim only makes sense if the agencies have sought and, under some circumstances, obtained authorization to track cell phones pursuant to some other legal process requiring a lower evidentiary showing. (Again, you would not have “diversity” if the court had consistently responded to all such requests with: “No, get a warrant.”)

The options here are pretty limited, because the Foreign Intelligence Surveillance Act only provides for a few different kinds of orders to be issued by the FISC. There’s a full electronic surveillance warrant, requiring a probable cause showing that the target is an “agent of a foreign power.” There’s a warrant for physical search, with the same standard, which doesn’t seem likely to be relevant to geotracking. The only other real options are so-called “pen register” orders, which are used to obtain realtime communications metadata, and Section 215. Both require only that the information sought be “relevant” to an ongoing national security investigation. For pen registers, the applicant need only “certify” that this is the case, which leaves judges with little to do beyond rubber-stamping orders. Section 215 orders require a “statement of facts showing that there are reasonable grounds” to think the information sought is “relevant,” but the statute also provides that any records are automatically relevant if they pertain to a suspected “agent of a foreign power,” or to anyone “in contact with, or known to” such an agent, or to the “activities of a suspected agent of a foreign power who is the subject of [an] authorized investigation.” The only way there can logically be “a diverse set of rulings” about the “quantum of evidence and the procedures required” to conduct cell phone location tracking is if the secret court has, on at least some occasions, allowed it under one or both of those authorities. Perhaps ironically, then, this terse response is not far short of a confirmation.

In criminal investigations, as I noted in a previous post, the Justice Department normally seeks a full warrant in order to do highly accurate, 24-hour realtime location tracking, though it is not clear they believe this is constitutionally required. With a court order for the production of records based on “specific and articulable facts,” they can get call records generally indicating the location of the nearest cell tower when a call was placed—a much less precise and intrusive form of tracking, but one that is increasingly revealing as providers store more data and install ever more cell towers. For realtime tracking that is less precise, they’ll often seek to bundle a records order with a pen register order, to create a “hybrid” tracking order. Judges are increasingly concluding that these standards do not adequately protect constitutional privacy interests, but you’d expect a “diverse set of rulings” if the FISC had adopted a roughly parallel set of rules—except, of course, that the standards for the equivalent orders on the intelligence side are a good deal more permissive. The bottom line, though, is that this makes it all but certain the intelligence agencies are secretly tracking people—and potentially large numbers of people—whom they do not have probable cause to believe, and may not even suspect, are involved in terrorism or espionage. No wonder Wyden and Udall are concerned.

Moral Panic and Your Privacy

Want to understand a big chunk of what Washington, D.C. does? Learn about “moral panic.”

Moral panic is a dynamic in the political and media spheres in which some threat to social order—often something taboo—causes a response that goes far beyond meeting the actual threat. It’s a socio-political stampede, if you will. You might be surprised to learn how easily stampeded your society is.

Take a look at H.R. 1981, the Protecting Children from Internet Pornographers Act of 2011. It’s got everything: porn, children, the Internet. And it’s got everything: financial services providers dragooned into law enforcement, data retention requirements heaped on Internet service providers, expanded “administrative subpoena” authority. (Administrative subpoenas are an improvisation to accommodate the massive power of the bureaucracy, and they’ve become another end-run around the Fourth Amendment. If it’s “administrative” it must be reasonable, goes the non-thinking…)

This isn’t a bill about child predation. It’s a bald-faced attack on privacy and limited government. Congress can move legislation like this, even in the era of the Tea Party movement, because child predation is a taboo subject. The inference is too strong in too many minds that opposing government in-roads on privacy is somehow supporting child exploitation. Congress and its allies use taboos to cow the populace into accepting yet more government growth and yet more surveillance.

I’m not turned to mush by taboos, so the question I’m most interested in having asked at tomorrow’s hearing on the bill in the House Judiciary Committee is: “Under what theory of the Commerce Clause is this bill within the power of the federal government?”

FBI’s New Guidelines Further Loosen Constraints on Monitoring

The New York Times’s Charlie Savage reports that the FBI is preparing to release a new Domestic Investigations and Operations Guide (DIOG), further relaxing the rules governing the Bureau’s investigation of Americans who are not suspected of any wrongdoing.

This comes just three years after the last major revision of the FBI manual, which empowered agents to employ a broad range of investigative techniques in exploratory “assessments” of citizens or domestic groups, even in the absence of allegations or evidence of wrongdoing, which are needed to open an “investigation.” The FBI assured Congress that it would conduct intensive training, and test agents to ensure that they understood the limits of the new authority—but the Inspector General found irregularities suggestive of widespread cheating on those tests.

Agents can already do quite a bit even without opening an “assessment”: They can consult the government’s own massive (and ever-growing) databases, or search the public Internet for “open source” intelligence. If, however, they want to start digging through state and local law enforcement records, or plumb the vast quantities of information held by commercial data aggregators like LexisNexis or Acxiom, they currently do have to open an assessment. Again, that doesn’t mean they’ve got to have evidence—or even an allegation—that their target is doing anything illegal, but it does mean they’ve got to create a paper trail and identify a legitimate purpose for their inquiries. That’s not much of a limitation, to be sure, but it does provide a strong deterrent to casual misuse of those databases for personal reasons. That paper trail means an agent who might be tempted to use government resources for personal ends—to check up on an ex or a new neighbor—has good reason to think twice.

Removing that check means there will be a lot more digging around in databases without any formal record of why. Even though most of those searches will be legitimate, that makes the abuses more likely to get lost in the crowd. Indeed, a series of reports by the Inspector General’s Office finding “widespread and serious misuse” of National Security Letters noted that lax recordkeeping made it extremely difficult to accurately gauge the seriousness of the abuses or their true extent—and, of course, to hold the responsible parties accountable. Moreover, the most recent of those reports strongly suggests that agents engaged in illegal use of so-called “exigent letters” resisted the introduction of new records systems precisely because they knew (or at least suspected) their methods weren’t quite kosher.

The new rules will also permit agents to rifle through a person’s garbage when conducting an “assessment” of someone they’d like to recruit as an informant or mole. The reason, according to the Times, is that “they want the ability to use information found in a subject’s trash to put pressure on that person to assist the government in the investigation of others.” Not keen on being dragooned into FBI service? Hope you don’t have anything embarrassing in your dumpster! Under the current rules, physical surveillance squads can be assigned to a target only once, for a limited time, in the course of an assessment—that limit, too, falls by the wayside in the revised DIOG.

The Bureau characterizes the latest round of changes as “tweaks” to the most recent revisions. That probably understates the significance of some of the changes, but one reason it’s worrying to see another bundle of revisions so soon after the last overhaul is precisely that it’s awfully easy to slip a big aggregate change under the radar by breaking it up into a series of “tweaks.”

We’ve seen such a move already with respect to National Security Letters, which enable access to a wide array of sensitive financial, phone, and Internet records without a court order—as long as the information is deemed relevant to an “authorized investigation.” When Congress massively expanded the scope of these tools under the USA Patriot Act, legislators understood that to mean full investigations, which must be based on “specific facts” suggesting that a crime is being committed or that a threat to national security exists. Just two years later, the Attorney General’s guidelines were quietly changed to permit the use of NSLs during “preliminary” investigations, which need not meet that standard. Soon, more than half of the NSLs issued each year were used for such preliminary inquiries (though they aren’t available for mere “assessments”… yet).

The FBI, of course, prefers to emphasize all the restrictions that remain in place. We’ll probably have to wait a year or two to see which of those get “tweaked” away next.

Atlas Bugged: Why the “Secret Law” of the Patriot Act Is Probably About Location Tracking

Barack Obama’s AutoPen has signed another four-year extension of three Patriot Act powers, but one silver lining of this week’s lopsided battle over the law is that mainstream papers like The New York Times have finally started to take note of the growing number of senators who have raised an alarm over a “secret interpretation” of Patriot’s “business records” authority (aka Section 215). It would appear to be linked to a “sensitive collection program” referenced by a Justice Department official at hearings during the previous reauthorization debate—one that would be disrupted if 215 orders were restricted to the records of suspected terrorists, their associates, or their “activities” (e.g., large purchases of chemicals used to make bombs). Naturally, lots of people are starting to wonder just what this program, and the secret interpretation of the law that may be associated with it, are all about.

All we can do is speculate, of course: only a handful of legislators and people with top-secret clearances know for sure. But a few of us who closely monitor national security and surveillance issues have come to the same conclusion: it probably involves some form of cellular phone geolocation tracking, potentially on a large scale. The evidence for this is necessarily circumstantial, but I think it’s fairly persuasive when you add it all up.

First, a bit of background. The recent fiery floor speeches from Sens. Wyden and Udall are the first time widespread attention has been drawn to this issue—but it was actually first broached over a year ago, by Sen. Richard Durbin and then-Sen. Russ Feingold, as I point out in my new paper on Patriot surveillance. Back in 2005, language that would have required Section 215 business record orders to pertain to terror suspects, or their associates, or the “activities” of a terror group won the unanimous support of the Senate Judiciary Committee, though it was not ultimately included in the final reauthorization bill. Four years later, however, the Justice Department was warning that such a requirement would interfere with that “sensitive collection program.” As Durbin complained at the time:

The real reason for resisting this obvious, common-sense modification of Section 215 is unfortunately cloaked in secrecy. Some day that cloak will be lifted, and future generations will ask whether our actions today meet the test of a democratic society: transparency, accountability, and fidelity to the rule of law and our Constitution.

Those are three pretty broad categories of information—and it should raise a few eyebrows to learn that the Justice Department believes it routinely needs to get information outside their scope for counterterror investigations. Currently, any record asserted to be “relevant” to an investigation (a standard so low it’s barely a standard) is subject to Section 215, and records falling within those three categories enjoy a “presumption of relevance.” That means the judges on the secret Foreign Intelligence Surveillance Court lack discretion to evaluate for themselves whether such records are really relevant to an investigation; they must presume their relevance. With that in mind, consider that the most recent report to Congress on the use of these powers shows a record 96 uses of Section 215 in 2010, up from 22 the previous year. Perhaps most surprisingly, though, the FISC saw fit to “modify” (which almost certainly means “narrow the scope of”) 42 of those orders. Since the court’s discretion is limited with respect to records of suspected terrorists and their associates, it seems probable that those “modifications” involved applications for orders that sweep more broadly. But why would such records be needed? Hold that thought.

Fast forward to this week. We hear Sen. Wyden warning that “When the American people find out how their government has secretly interpreted the Patriot Act, they will be stunned and they will be angry,” a warning echoed by Sen. Udall. We know that this surprising and disturbing interpretation concerns one of the three provisions that had been slated for sunset. Lone Wolf remains unused, so that’s out, leaving roving wiretaps and Section 215. In the context of remarks by Sens. Feingold and Durbin, and the emphasis recently placed on concerns about Section 215 by Sen. Udall, the business records provision seems like a safe bet. By its explicit terms, that authority is already quite broad: What strained secret interpretation of it could be surprising to both legislators and the general public, but also meet with the approval of the FISC and the Office of Legal Counsel?

For one possible answer, look to the criminal context, where the Department of Justice has developed a novel legal theory, known as the “hybrid theory,” according to which law enforcement may do some types of geolocation tracking of suspects’ cellular phones without obtaining a full-blown probable cause warrant. The “hybrid theory” involves fusing two very different types of surveillance authority. “Pen registers” allow the monitoring, in real time, of the communications “metadata” from phones or other communications devices (phone numbers dialed, IP addresses connected to). For cellular phones, that “metadata” would often make it possible to pinpoint at least approximately—and, increasingly, with a good deal of precision, especially in urban areas—the location of the user. Federal law, however, prohibits carriers from disclosing location information “solely” pursuant to a pen register order. Another type of authority, known as a 2703(d) order, is a bit like Patriot’s business records authority (though only for telecommunications providers), and is used to compel the production of historical (as opposed to real-time/prospective) records, without any exclusion on location information. The Justice Department’s novel theory—which I discussed at a recent Cato event with Sen. Wyden on geolocation tracking—is that by bundling these two authorities in a new kind of combination order, they can do real-time geolocation tracking without the need to obtain a full Fourth Amendment warrant based on probable cause. Many courts have been skeptical of this theory and rejected it—but at least some have gone along with this clever bit of legal origami. Using the broad business records power of Patriot’s Section 215 in a similar way, to enable physical tracking of anyone with a cellphone, would seem to fit the bill, then: certainly surprising and counterintuitive, not what most people think of when we talk about “obtaining business records,” but nevertheless a maneuver with a legal track record of convincing some courts.

Now, consider that Sen. Wyden has also recently developed a concern with the practice of mobile location tracking, which has become so popular that the U.S. Marshals Service, now the federal government’s most prolific (known) user of pen register orders, of which it issued over 6,000 last year, employs the “hybrid theory” to obtain location information by default with each such order. Wyden has introduced legislation that would establish standards for mobile location tracking, and it has two surprising and notable features. First, while the location tracking known to the public all involves criminal investigations subject to the Electronic Communications Privacy Act (ECPA), that’s not where Wyden’s bill makes its primary modifications. Instead, the key amendments are made directly to the Foreign Intelligence Surveillance Act—which language is then incorporated by reference into ECPA. Second, even though one section establishes the “exclusive means” for geolocation tracking, the proposal goes out of its way to additionally modify the FISA pen register provision and the Section 215 business records provision to explicitly prohibit their use to obtain geolocation information—as though there is some special reason to worry about those provisions being used that way, requiring any possible ambiguity to be removed.

Sen. Udall, meanwhile, always uses the same two examples when he talks about his concerns regarding Section 215: he warns about “unfettered” government access to “business records ranging from a cell phone company’s phone records to an individual’s library history,” even when the records relate to people with no connection to terrorism.  The reference to libraries is no surprise, because the specter of Section 215 being used to probe people’s reading habits was raised so insistently by librarians that it became common to see it referenced as the “library provision.” The other example is awfully specific though: he singles out cell phone records, even though many types of sensitive phone records can already be obtained without judicial oversight using National Security Letters. But he doesn’t just say “phone records”—it’s cell phone records he’s especially concerned about. And where he talks about “an individual’s” library records, he doesn’t warn about access to “an individual’s” cell phone records, but rather the company’s records.  As in, the lot of them.

Tracking the location of suspected terrorists, and perhaps their known associates, might not seem so objectionable—though one could argue whether Section 215’s “relevance” standard was sufficient, or whether a full FISA electronic surveillance warrant (requiring a showing of probable cause) would be a more appropriate tool. But that kind of targeted tracking would not require broad access to records of people unconnected to terror suspects and their known associates, which is hinted at by both Sen. Udall’s remarks and the high rate of modifications imposed on Section 215 orders by the FISA court. Why might that be needed in the course of a geolocation tracking program?

For a possible answer, turn to the “LocInt” or “Location Intelligence” services marketed to U.S. law enforcement and national security clients by the firm TruePosition. Among the capabilities the company boasts for its software (drawn from both its site and a 2008 white paper the company sponsored) are:

● the ability to analyze location intelligence to detect suspicious behavioral patterns,
● the ability to mine historical mobile phone data to detect relationships between people, locations, and events,
● TruePosition LOCINT can mine location data to find out if the geoprofile of a prepaid phone matches the geoprofile of a potential threat and identify it as such, and
● leveraging location intelligence, officials can identify mobile phones of interest that frequently communicate with each other, or are within close proximity, making it easier to identify criminals and their associates. [Emphasis added.]

Certainly one can see how these functions might be useful: terrorists trained in counterintelligence tactics might seek to avoid surveillance, or identification of co-conspirators, by communicating only in person. Calling records would be useless for revealing physical meetings—but location records are another story. What these functions have in common, however, is that like any kind of data mining, they require access to a large pool of data, not just the records of a known suspect. You can find out who your suspect is phoning by looking at his phone records. But if you want to know who he’s in close physical proximity to—with unusual frequency, and most likely alone—you need to sift through everyone’s phone location records, or at any rate a whole lot of them.  The interesting thing is, it’s not obvious there’s any legal way to actually do all that: full-fledged electronic surveillance warrants would be a non-starter, since they require probable cause for each target. But clearly the company expects to be able to sell these capabilities to some government entity. The obvious candidate is the FBI, availing itself of the broad authority of Section 215—perhaps in combination with FISA pen registers when the tracking needs to happen in real time.
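For a sense of what “sifting through everyone’s phone location records” would look like mechanically, here is a minimal Python sketch of the proximity analysis just described. The data layout, time bucketing, and threshold are all invented for illustration; this is not a description of TruePosition’s product or of any agency’s actual system.

    from collections import defaultdict

    def frequent_colocations(records, suspect, min_meetings=3):
        """records: iterable of (phone_id, time_bucket, tower_id) observations.
        Returns phones seen at the same tower, in the same time window, as the
        suspect at least min_meetings times."""
        # Index everyone's records by place and time: the whole pool of data,
        # not just the suspect's own records, which is the point made above.
        present = defaultdict(set)
        for phone, bucket, tower in records:
            present[(bucket, tower)].add(phone)

        meetings = defaultdict(int)
        for phones in present.values():
            if suspect in phones:
                for other in phones - {suspect}:
                    meetings[other] += 1
        return {phone: n for phone, n in meetings.items() if n >= min_meetings}

Note that the index has to be built over the entire pool of records before the suspect’s entries are even consulted, which is why a capability like this cannot be run against a single target’s records alone.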

As a final note of interest, the Office of the Inspector General’s reports on National Security Letters contain numerous oblique references to “community of interest [REDACTED]” requests. Traditional “community of interest” analysis means looking at the pattern of communications of not just the primary suspect of an investigation, but their whole social circle—the people the suspect communicates with, and perhaps the people they in turn communicate with, and so on. Apparently the fact that the FBI does this sort of traditional CoI analysis is not considered secret, because that phrase remains unredacted. What, then, could that single omitted word be? One candidate that would fit in the available space is “location” or “geolocation”—meaning either location tracking of people called by the suspect or perhaps the use of location records to build a suspect’s “community of interest” by “identify[ing] mobile phones…within close proximity” to the suspects. The Inspector General reports cover the first few years following passage of the Patriot Act, before an opinion from the Office of Legal Counsel held that NSLs could not properly be used to obtain the full range of communications metadata the FBI had been getting under them. If NSLs had been used for location-tracking information prior to that 2008 opinion, it would likely have been necessary to rely on Section 215 past that point, which would fit the timeline.
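For comparison, traditional community-of-interest analysis is just a hop-by-hop expansion over call records. Here is a minimal Python sketch with a made-up call log, showing the one- and two-hop circles the reports describe; the redacted variant speculated about above would presumably swap these calling edges for proximity edges derived from location data.

    from collections import defaultdict

    # Toy community-of-interest expansion over an invented call log: the suspect's
    # direct contacts (one hop), then the contacts' contacts (two hops).
    calls = [("suspect", "a"), ("a", "b"), ("suspect", "c"), ("c", "d"), ("d", "e")]

    graph = defaultdict(set)
    for x, y in calls:
        graph[x].add(y)
        graph[y].add(x)

    def community(seed, hops):
        frontier, seen = {seed}, {seed}
        for _ in range(hops):
            frontier = {n for f in frontier for n in graph[f]} - seen
            seen |= frontier
        return seen - {seed}

    print(sorted(community("suspect", 1)))  # ['a', 'c']
    print(sorted(community("suspect", 2)))  # ['a', 'b', 'c', 'd']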

Is all of that conclusive? Of course not; again, this is speculation. But a lot of data points fit, and it would be quite surprising if the geolocation capabilities increasingly being called upon for criminal investigations were not being used for intelligence purposes. If they are, Section 215 is the natural mechanism.

Even if I’m completely wrong, however, the larger point remains: while intelligence operations must remain secret, a free and democratic society is not supposed to be governed by secret laws—and substantive judicial interpretations are no less a part of “the law” than the text of statutes. Whatever power the government has arrogated to itself by an “innovative” interpretation of the Patriot Act, it should be up to a free citizenry to consider the case for it, determine whether it is so vital to security as to justify the intrusion on privacy, and hold their representatives accountable accordingly. Instead, Congress has essentially voted blind—reauthorizing powers that even legislators, let alone the public, do not truly understand. Whether it’s location tracking or something else, this is fundamentally incompatible with the preconditions of both democracy and a free society.