Tag: surveillance

Secret Cell Phone Tracking in the Sunshine State

The South Florida Sun-Sentinel provides us with one more data point showing the growing frequency with which police are using cell phones as tracking devices—a practice whose surprising prevalence the ACLU shed light on in April. In fiscal year 2011-2012, the first year Florida kept tabs on cell location tracking, state authorities made 171 location tracking requests—and apparently hope to expand the program.

The article alludes to a couple of specific cases in which location tracking was employed—to find a murder suspect and a girl who was thought to have been kidnapped—both of which are perfectly legitimate uses of the technology in principle. In general, if there’s enough evidence to issue an arrest warrant, the same evidence should support a warrant for tracking authority when the suspect’s location isn’t immediately known. In cases where police have a good faith belief that there’s a serious emergency—such as a suspected kidnapping—it’s even reasonable to allow police to seek location information without a court order, as is standard practice with most other kinds of electronic records requests. But the Sun-Sentinel report is also unsettlingly vague about the precise legal standard followed in non-emergency cases. According to a law enforcement official quoted in the story, the Florida Department of Law Enforcement’s electronic surveillance unit “always seeks judicial approval to trail someone with GPS,” while the written policy only “instructs agents to show probable cause for criminal activity to the department’s legal counsel to see if a court order is necessary,” implying that it sometimes is not necessary.

The term “court order,” however, is quite broad: the word conspicuously absent from these descriptions is “warrant”—an order meeting the Fourth Amendment’s standards. In the past, the Justice Department has argued that many kinds of location tracking may be conducted using other kinds of authority, such as so-called “pen register” and “2703(d)” orders. Unlike full-fledged search warrants, which require a showing of “probable cause” to believe the suspect has committed a crime, these lesser authorities require only “reasonable grounds” to believe the information sought would be “relevant” to some legitimate investigation. That is, needless to say, a far lower hurdle to clear.

Police refusal to discuss the program with reporters is also part of a larger pattern of secrecy surrounding location tracking. As Magistrate Judge Stephen Smith observes in a recent and important paper, such orders are often sealed indefinitely—which in practice means “forever.” Unlike the targets of ordinary wiretaps, who must eventually be informed about the surveillance after the fact, citizens who’ve been lojacked may never learn that the authorities were mapping their every move. Such secrecy may be useful to police—but it also means that improper use of an intrusive power is far less likely to ever come to light.

Location tracking can be a valuable tool for an array of legitimate law enforcement purposes—but especially in light of the Supreme Court’s unanimous decision in United States v. Jones, it has to be governed by clear, uniform standards that satisfy the demands of the Fourth Amendment.

NSA Spying and the Illusion of Oversight

Last week, the House Judiciary Committee hurtled toward reauthorization of a controversial spying law with a loud-and-clear declaration: not only do we have no idea how many American citizens are caught in the NSA’s warrantless surveillance dragnet, we don’t care—so please don’t tell us! By a 20–11 majority, the panel rejected an amendment that would have required the agency’s inspector general to produce an estimate of the number of Americans whose calls and e-mails were vacuumed up pursuant to broad “authorizations” under the FISA Amendments Act.

The agency’s Inspector General has apparently claimed that producing such an estimate would be “beyond the capacity of his office” and (wait for it) “would itself violate the privacy of U.S. persons.” This is hard to swallow on its face: there might plausibly be difficulties identifying the parties to intercepted e-mail communications, but at least for traditional phone calls, it should be trivial to tally up the number of distinct phone lines with U.S. area codes that have been subject to interception.
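To see why the phone-records version of that tally really is trivial, here is a minimal sketch. The record format is invented for illustration and reflects no real NSA system; note also that country code +1 covers Canada as well as the U.S., so a real tally would filter by area code, as the text suggests:

```python
# Hypothetical illustration: counting distinct U.S. phone lines among
# intercept records. The record format here is an assumption, not drawn
# from any actual collection system.

def count_us_lines(records):
    """Count distinct E.164 numbers with country code +1.

    (+1 is the North American Numbering Plan, so this slightly
    overcounts by including Canadian lines; a real tally would
    check U.S. area codes specifically.)
    """
    us_lines = set()
    for rec in records:
        for number in (rec["caller"], rec["callee"]):
            if number.startswith("+1"):
                us_lines.add(number)
    return len(us_lines)

sample = [
    {"caller": "+12025550123", "callee": "+92215550987"},
    {"caller": "+12025550123", "callee": "+96715550456"},
    {"caller": "+442075550111", "callee": "+12125550777"},
]
print(count_us_lines(sample))  # → 2 (the +1202 line appears twice)
```

A distinct-count over billing-style metadata is about as simple as database queries get, which is what makes the “beyond the capacity of his office” claim so hard to credit.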

If the claim is even partly accurate, however, this should in itself be quite troubling. In theory, the FAA is designed to permit algorithmic surveillance of overseas terror suspects—even when they communicate with Americans. (Traditionally, FISA left surveillance of wholly foreign communications unregulated, but required a warrant when at least one end of a wire communication was in the United States.) FAA surveillance programs must, however, be designed to “prevent the intentional acquisition of any communication as to which the sender and all intended recipients are known at the time of the acquisition to be located in the United States”—a feature the law’s supporters tout to reassure us they haven’t opened the door to warrantless surveillance of purely domestic communications. The wording leaves a substantial loophole, though. “Persons” as defined under FISA covers groups and other corporate entities, so an interception algorithm could easily “target persons” abroad but still flag purely domestic communications—a concern pointedly raised by the former head of the Justice Department’s National Security Division. The “prevent the intentional acquisition” clause is supposed to close that loophole. But Attorney General Eric Holder has made it explicit that the point of the FAA is precisely to allow eavesdropping on broad “categories” of surveillance targets, defined by general search criteria, without having to identify individual targets. And if the NSA routinely sweeps up communications in bulk without any way of knowing where the endpoints are located, then it never has to worry about violating the “known at the time of acquisition” clause. Indeed, we already know that “overcollection” of purely domestic communications occurred on a large scale almost immediately after the law came into effect.
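The logic of that loophole can be made concrete with a purely hypothetical sketch (the filter, message format, and criteria are all invented for illustration): a content-keyed acquisition filter simply never consults endpoint locations, so the fact that both ends are domestic is never “known at the time of acquisition”—even when it is true.

```python
# Illustrative only: a toy acquisition filter keyed on search criteria.
# Because it never looks at endpoint locations, the statutory condition
# "sender and all intended recipients KNOWN to be in the U.S." can never
# be triggered at acquisition time, whatever the facts are.

def acquire(message, keywords):
    """Flag a message if it matches any general search criterion."""
    text = message["body"].lower()
    return any(k in text for k in keywords)

msg = {
    "body": "Wire the funds through the charity account.",
    "sender_location": "US",      # both endpoints are domestic...
    "recipient_location": "US",   # ...but the filter never checks these fields
}
print(acquire(msg, ["charity account", "wire the funds"]))  # → True
```

The message is acquired on content alone; the location fields sit unread. That is the sense in which bulk, criteria-driven collection makes the “known at the time of acquisition” safeguard self-neutralizing.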

If we care about the spirit as well as the letter of that constraint being respected, it ought to be a little disturbing that the NSA has admitted it doesn’t have any systematic mechanism for identifying communications with U.S. endpoints. Similar considerations apply to the “minimization procedures” which are supposed to limit the retention and dissemination of information about U.S. persons: How meaningfully can these be applied if there’s no systematic effort to detect when a U.S. person is party to a communication? If this is done, even if only for the subset of communications reviewed by human analysts, why can’t that sample be used to generate a ballpark estimate for the broader pool of intercepted messages? How can the Senate report on the FAA extension seriously tout “extensive” oversight of the law’s implementation when it lacks even these elementary figures? If it is truly impossible to generate those figures, isn’t that a tacit admission that meaningful oversight of these incredible powers is also impossible?

Here’s a slightly cynical suggestion: Congress isn’t interested in demanding the data here because it might make it harder to maintain the pretense that the FAA is all about “foreign” surveillance, and therefore needn’t provoke any concern about domestic civil liberties. A cold hard figure confirming that large numbers of Americans are being spied on under the program would make such assurances harder to deliver with a straight face. The “overcollection” of domestic traffic by NSA reported in 2009 may have encompassed “millions” of communications, and still constituted only a small fraction of the total—which suggests that we could be dealing with a truly massive number.

In truth, the “foreign targeting” argument was profoundly misleading. FISA has never regulated surveillance of wholly foreign communications: if all you’re doing is listening in on calls between foreigners in Pakistan and Yemen, you don’t even need the broad authority provided by the FAA. FISA and the FAA only come into play when one party to the communication is a U.S. person—and perhaps for e-mails stored in the U.S. whose ultimate destination is unknown. Just as importantly, when you’re talking about large-scale, algorithm-based surveillance, it’s a mistake to put too much weight on “targeting” in the initial broad acquisition stage. If the first stage of your acquisition algorithm says “intercept all calls and e-mails between New York and Pakistan,” that will be kosher for FAA purposes provided the nominal target is the Pakistan side, but it will entail spying on just as many Americans as foreigners in practice. If we knew just how many Americans, the FAA might not enjoy such a quick, quiet ride to reauthorization.

On Breach of Decorum and Government Growth

Last week, the Center for Democracy and Technology changed its position on CISPA, the Cyber Intelligence Sharing and Protection Act, two times in short succession, easing the way for House passage of a bill profoundly threatening to privacy.

Declan McCullagh of C|Net wrote a story about it called “Advocacy Group Flip-Flops Twice Over CISPA Surveillance Bill.” In it, he quoted me saying: “A lot of people in Washington, D.C. think that working with CDT means working for good values like privacy. But CDT’s number one goal is having a seat at the table. And CDT will negotiate away privacy toward that end.”

That comment netted some interesting reactions. Some were gleeful about this “emperor-has-no-clothes” moment for CDT. To others, I was inappropriately “insulting” to the good people at CDT. This makes the whole thing worthy of further exploration. How could I say something mean like that about an organization whose staff spend so much time working in good faith on improving privacy protections? Some folks there absolutely do. This does not overcome the institutional role CDT often plays, which I have not found so creditable. (More on that below. Far below…)

First, though, let me illustrate how CDT helped smooth the way for passage of the bill:

Congress is nothing if not ignorant about cybersecurity. It has no idea what to do about the myriad problems that exist in securing computers, networks, and data. So its leaders have fixed on “information sharing” as a panacea.

Because the nature and scope of the problems are unknown, the laws that stand in the way of relevant information sharing are unknown. The solution? Scythe down as much law as possible. (What’s actually needed, most likely, is a narrow amendment to ECPA. Nothing of the sort is yet in the offing.) But this creates a privacy problem: an “information sharing” bill could facilitate promiscuous sharing of personal information with government agencies, including the NSA.

On the House floor last week, the leading Republican sponsor of CISPA, Mike Rogers (R-MI), spoke endlessly about privacy and civil liberties, the negotiations, and the process he had undertaken to try to resolve problems in the privacy area. At the close of debate on the rule that would govern debate on the bill, he said:

The amendments that are following here are months of negotiation and work with many organizations—privacy groups. We have worked language with the Center for Democracy and Technology, and they just the other day said they applauded our progress on where we’re going with privacy and civil liberties. So we have included a lot of folks.

You see, just days before, CDT had issued a blog post saying that it would “not oppose the process moving forward in the House.” The full text of that sentence is actually quite precious because it shows how little CDT got in exchange for publicly withdrawing opposition to the bill. Along with citing “good progress,” CDT president and CEO Leslie Harris wrote:

Recognizing the importance of the cybersecurity issue, in deference to the good faith efforts made by Chairman Rogers and Ranking Member Ruppersberger, and on the understanding that amendments will be considered by the House to address our concerns, we will not oppose the process moving forward in the House.

Cybersecurity is an important issue—never mind whether the bill would actually help with it. The leadership of the House Intelligence Committee has acted in good faith. And amendments will evidently be forthcoming in the House. So go ahead and pass a bill not ready to become law, in light of “good progress.”

Then CDT got spun.

As McCullagh tells it:

The bill’s authors seized on CDT’s statement to argue that the anti-CISPA coalition was fragmenting, with an aide to House Intelligence Committee Chairman Mike Rogers (R-Mich.) sending reporters e-mail this morning, recalled a few minutes later, proclaiming: “CDT Drops Opposition to CISPA as Bill Moves to House Floor.” And the Information Technology Industry Council, which is unabashedly pro-CISPA, said it “applauds” the “agreement between CISPA sponsors and CDT.”

CDT quickly reversed itself, but the damage was done. Chairman Rogers could make an accurate but misleading floor statement omitting the fact that CDT had again reversed itself. This signaled to members of Congress and their staffs—who don’t pay close attention to subtle shifts in the views of organizations like CDT—that the privacy issues were under control. They could vote for CISPA without getting privacy blow-back. Despite furious efforts by groups like the Electronic Frontier Foundation and the ACLU, the bill passed 248 to 168.

Defenders of CDT will point out—accurately—that it argued laboriously for improvements to the bill. And with the bill’s passage inevitable, that was an essential benefit to the privacy side.

Well, yes and no. To get at that question, let’s talk about how groups represent the public’s interests in Washington, D.C. We’ll design a simplified representation game with the following cast of characters:

  • one powerful legislator, antagonistic to privacy, whose name is “S.R. Veillance”;
  • twenty privacy advocacy groups (Groups A through T); and
  • 20,000 people who rely on these advocacy groups to protect their privacy interests.

At the outset, the 20,000 people divide their privacy “chits”—that is, their donations and their willingness to act politically—equally among the groups. Based on their perceptions of the groups’ actions and relevance, the people re-assign their chits each legislative session.

Mr. Veillance has an anti-privacy bill he would like to get passed, but he knows it will meet resistance if he doesn’t get 2,500 privacy chits to signal that his bill isn’t that bad. If none of the groups give him any privacy chits, his legislation will not pass, so Mr. Veillance goes from group to group bargaining in good faith and signaling that he intends to do all he can to pass his bill. He will reward the groups that work with him by including such groups in future negotiations on future bills. He will penalize the groups that do not by excluding them from future negotiations.

What we have is a game somewhat like the prisoner’s dilemma in game theory. Though it is in the best interest of the society overall for the groups to cooperate and hold the line against a bill, individual groups can advantage themselves by “defecting” from the interests of all. These defectors will be at the table the next time an anti-privacy bill is negotiated.

Three groups—let’s say Group C, Group D, and Group T—defect from the pack. They make deals with Mr. Veillance to improve his bill, and in exchange they give him their privacy chits. He uses their 3,000 chits to signal to his colleagues that they can vote for the bill without fear of privacy-based repercussions.

At the end of the first round, Mr. Veillance has passed his anti-privacy legislation (though weakened, from his perspective). Groups C, D, and T did improve the bill, making it less privacy-invasive than it otherwise would have been, and they have also positioned themselves to be more relevant to future privacy debates because they will have a seat at the table. Hindsight makes the passage of the bill look inevitable, and CDT looks all the wiser for working with Mr. Veillance while others futilely opposed the bill.

Thus, having defected, CDT is now able to get more of people’s privacy chits during the next legislative session, so it has more bargaining power and money than other privacy groups. That bargaining power is relevant, though, only if Mr. Veillance moves more bills in the future. To maintain its bargaining power and income, it is in the interest of CDT to see that legislation passes regularly. If anti-privacy legislation never passes, CDT’s unique role as a negotiator will not be valued and its ability to gather chits will diminish over time.
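The dynamic described above can be sketched as a toy simulation. The numbers (20 groups, 20,000 chits, a 2,500-chit threshold, 3 defectors) come from the text; the re-allocation rule—people shift a fixed fraction of each holdout group’s chits toward the groups that “had a seat at the table”—is my own assumption, chosen only to illustrate the feedback loop:

```python
# Toy model of the "representation game." Parameters from the text; the
# 10% chit-reallocation rule is an illustrative assumption, not a claim
# about real donor behavior.

def play_sessions(sessions=5, groups=20, total_chits=20_000,
                  threshold=2_500, defectors=3, shift=0.10):
    """Simulate repeated legislative sessions; return (bills passed, final chits)."""
    chits = {g: total_chits / groups for g in range(groups)}
    bills_passed = 0
    for _ in range(sessions):
        # The best-endowed groups have the most to gain from a seat at the table.
        defecting = sorted(chits, key=chits.get, reverse=True)[:defectors]
        pledged = sum(chits[g] for g in defecting)
        if pledged >= threshold:
            bills_passed += 1
            # People reward the "relevant" groups: a fraction of every
            # holdout's chits shifts to the defectors for the next session.
            pool = sum(chits[g] * shift for g in chits if g not in defecting)
            for g in chits:
                if g in defecting:
                    chits[g] += pool / defectors
                else:
                    chits[g] *= 1 - shift
    return bills_passed, chits

passed, final = play_sessions()
print(passed)  # → 5: the bill passes every session once defection starts
```

Even starting from equal endowments, the three defectors clear the 2,500-chit threshold in round one (3 × 1,000 chits), the bill passes, and their chit share then snowballs session after session—the divide-and-conquer equilibrium the text describes.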

CDT plays a role in “improving” individual pieces of legislation to make them less privacy-invasive and it helps to ensure that improved—yet still privacy-invasive—legislation passes. Over the long run, to keep its seat at the table, CDT bargains away privacy.

This highly simplified representation game repeats itself across many issue-dimensions in every bill, and it involves many more, highly varied actors using widely differing influence “chits.” The power exchanges and signaling among parties end up looking like a kaleidoscope rather than the linear story of an organization subtly putting its own goals ahead of the public interest.

Most people working in Washington, D.C., and almost assuredly everyone at CDT, have no awareness that they live under the collective action problem illustrated by this game. This is why government grows and privacy recedes.

In his article, McCullagh cites CDT founder Jerry Berman’s role in the 1994 passage of CALEA, the Communications Assistance for Law Enforcement Act. I took particular interest in CDT’s 2009 backing of the REAL ID revival bill, PASS ID. In 2006, CDT’s Jim Dempsey helped give privacy cover to the use of RFID in identification documents, contrary to the principle that RFID is for products, not people. A comprehensive study of CDT’s institutional behavior to confirm or deny my theory would be very complex and time-consuming.

But divide and conquer works well. My experience is that CDT is routinely the first defector from the privacy coalition despite the earnest good intentions of many individual CDTers. And it’s why I say, perhaps in breach of decorum, things like: “A lot of people in Washington, D.C. think that working with CDT means working for good values like privacy. But CDT’s number one goal is having a seat at the table. And CDT will negotiate away privacy toward that end.”

The Lives of Others 2.0

Tattoo it on your forearm—or better, that of your favorite legislator—for easy reference in the next debate over wiretapping: government surveillance is a security breach—by definition and by design. The latest evidence of this comes from Germany, where there’s growing furor over a hacker group’s allegations that government-designed Trojan Horse spyware is not only insecure, but packed with functions that exceed the limits of German law:

On Saturday, the CCC (the hacker group) announced that it had been given hard drives containing “state spying software,” which had allegedly been used by German investigators to carry out surveillance of Internet communication. The organization had analyzed the software and found it to be full of defects. They also found that it transmitted information via a server located in the United States. As well as its surveillance functions, it could be used to plant files on an individual’s computer. It was also not sufficiently protected, so that third parties with the necessary technical skills could hijack the Trojan horse’s functions for their own ends. The software possibly violated German law, the organization said.

Back in 2004–2005, software designed to facilitate police wiretaps was exploited by unknown parties to intercept the communications of dozens of top political officials in Greece. And just last year, we saw an attack on Google’s e-mail system targeting Chinese dissidents, which some sources have claimed was carried out by compromising a backend interface designed for law enforcement.

Any communications architecture that is designed to facilitate outsider access to communications—for all the most noble reasons—is necessarily more vulnerable to malicious interception as a result. That’s why technologists have looked with justified skepticism on periodic calls from intelligence agencies to redesign data networks for their convenience. At least in this case, the vulnerability is limited to specific target computers on which the malware has been installed. Increasingly, governments want their spyware installed at the switches—making for a more attractive target, and more catastrophic harm in the event of a successful attack.

Stalking the Secret Patriot Act

Since this spring’s blink-and-you-missed-it debate over reauthorization of several controversial provisions of the Patriot Act, Senators Ron Wyden (D-OR) and Mark Udall (D-CO) have been complaining to anyone who’d listen about a “Secret Patriot Act”—an interpretation of one of the law’s provisions by the classified Foreign Intelligence Surveillance Court granting surveillance powers exceeding those an ordinary person would understand to be conferred by the text of the statute itself. As I argued at the time, there is an enormous amount of strong circumstantial evidence suggesting that this referred to a “sensitive collection program” involving cell phone location tracking—potentially on a mass scale—using Patriot’s “Section 215” or “business records” authority.

Lest anyone think they’d let the issue drop, Wyden and Udall last week released a sharply worded letter to Attorney General Eric Holder, blasting the Justice Department for misleading the public about the scope of the government’s surveillance authority. The real audience for an open letter of this sort, of course, is not the nominal recipient, but rather the press and the public. Beyond simply reminding us that the issue exists, the letter confirms for the first time that the “secret law” of which the senators had complained does indeed involve Section 215. But there are some additional intriguing morsels for the attentive surveillance wonk.

The letter focuses particularly on “highly misleading” statements by Justice Department officials analogizing Section 215 powers to grand jury subpoenas. “As you know,” Wyden and Udall write, “Section 215 authorities are not interpreted in the same way that grand jury subpoena authorities are, and we are concerned that when Justice Department officials suggest that the two authorities are ‘analogous’ they provide the public with a false understanding of how surveillance law is interpreted in practice.”

Now, this is a little curious on its face. Ever since the original debate over the passage of the Patriot Act, its defenders have tried to claim that a variety of provisions allowing the FBI to more easily obtain sensitive records and documents were no big deal, because grand juries have long enjoyed similarly broad subpoena powers. The comparison has been specious all along: the grand jury is an arm of the judicial branch designed (at least in theory) to serve as a buffer between the power of prosecutors and the citizenry. It exists for the specific purpose of determining whether grounds for a criminal indictment exist, and is granted those broad subpoena powers precisely on the premise that it is not just another executive branch investigative agency. To argue, then, that it would make no difference if the FBI or the police could secretly exercise the same type of authority is to miss the point of how our system of government is meant to work in a pretty stunning way. It’s akin to suggesting that, since juries can sentence people to life in prison, it would be no big deal to give the president or the director of the FBI the same power.

That’s not what Wyden and Udall are stressing here, however. Rather, they seem to be suggesting that the scope of the 215 authority itself has been secretly interpreted in a way that goes beyond the scope of the grand jury subpoena power. Now that ought to be striking, because the grand jury’s power to compel the production of documents really is quite broad. Yet, what Wyden and Udall appear to be suggesting is that there is some kind of limit or restriction that does apply to grand jury subpoenas, but has been held by the secret court not to apply to Section 215 orders. One possibility is that the FISC may have seen fit to issue prospective 215 orders, imposing an ongoing obligation on telecommunications companies or other recipients to keep producing records related to a target as they’re created, rather than being limited to records and documents already in existence. But given the quantity of evidence that already suggests the “Secret Patriot Act” involves location tracking, I find it suggestive that the very short list of specific substantive limits on grand jury subpoena power in the U.S. Attorneys’ Manual includes this:

It is improper to utilize the grand jury solely as an investigative aid in the search for a fugitive in whose testimony the grand jury has no interest. In re Pedro Archuleta, 432 F. Supp. 583 (S.D.N.Y. 1977); In re Wood, 430 F. Supp. 41 (S.D.N.Y. 1977), aff’d sub nom In re Cueto, 554 F.2d 14 (2d Cir. 1977). … Since indictments for unlawful flight are rarely sought, it would be improper to routinely use the grand jury in an effort to locate unlawful flight fugitives.

As the manual makes clear, the constraints on the power of the grand jury generally are determined by its purpose and function, but locating subjects for the benefit of law enforcement (rather than as a means of securing their testimony before the grand jury) is one of the few things so expressly and specifically excluded. Could this be what Wyden and Udall are obliquely referring to?

On a possibly related note, the Director of National Intelligence’s office sent Wyden and Udall a letter back in July rebuffing their request for information about the legal standard governing geolocation tracking by the intelligence community. While refusing to get into specifics, the letter explains that “there have been a diverse set of rulings concerning the quantum of evidence and the procedures required to obtain such evidence.” Now, a bit of common sense here: it is inconceivable that any judge on the secret court would not permit cell phone geolocation tracking of a target who was the subject of a full-blown FISA electronic surveillance warrant based on probable cause. There would be no “diversity” if the intelligence agencies were uniformly using only that procedure and that “quantum of evidence.” This claim only makes sense if the agencies have sought and, under some circumstances, obtained authorization to track cell phones pursuant to some other legal process requiring a lower evidentiary showing. (Again, you would not have “diversity” if the court had consistently responded to all such requests with: “No, get a warrant.”)

The options here are pretty limited, because the Foreign Intelligence Surveillance Act only provides for a few different kinds of orders to be issued by the FISC. There’s a full electronic surveillance warrant, requiring a probable cause showing that the target is an “agent of a foreign power.” There’s a warrant for physical search, with the same standard, which doesn’t seem likely to be relevant to geotracking. The only other real options are so-called “pen register” orders, which are used to obtain realtime communications metadata, and Section 215. Both require only that the information sought be “relevant” to an ongoing national security investigation. For pen registers, the applicant need only “certify” that this is the case, which leaves judges with little to do beyond rubber-stamping orders. Section 215 orders require a “statement of facts showing that there are reasonable grounds” to think the information sought is “relevant,” but the statute also provides that any records are automatically relevant if they pertain to a suspected “agent of a foreign power,” or to anyone “in contact with, or known to” such an agent, or to the “activities of a suspected agent of a foreign power who is the subject of [an] authorized investigation.” The only way there can logically be “a diverse set of rulings” about the “quantum of evidence and the procedures required” to conduct cell phone location tracking is if the secret court has, on at least some occasions, allowed it under one or both of those authorities. Perhaps ironically, then, this terse response is not far short of a confirmation.

In criminal investigations, as I noted in a previous post, the Justice Department normally seeks a full warrant in order to do highly accurate, 24-hour realtime location tracking, though it is not clear they believe this is constitutionally required. With a court order for the production of records based on “specific and articulable facts,” they can get call records generally indicating the location of the nearest cell tower when a call was placed—a much less precise and intrusive form of tracking, but one that is increasingly revealing as providers store more data and install ever more cell towers. For realtime tracking that is less precise, they’ll often seek to bundle a records order with a pen register order, to create a “hybrid” tracking order. Judges are increasingly concluding that these standards do not adequately protect constitutional privacy interests, but you’d expect a “diverse set of rulings” if the FISC had adopted a roughly parallel set of rules—except, of course, that the standards for the equivalent orders on the intelligence side are a good deal more permissive. The bottom line, though, is that this makes it all but certain the intelligence agencies are secretly tracking people—and potentially large numbers of people—whom they do not have probable cause to believe, and may not even suspect, are involved in terrorism or espionage. No wonder Wyden and Udall are concerned.

Moral Panic and Your Privacy

Want to understand a big chunk of what Washington, D.C. does? Learn about “moral panic.”

Moral panic is a dynamic in the political and media spheres in which some threat to social order—often something taboo—causes a response that goes far beyond meeting the actual threat. It’s a socio-political stampede, if you will. You might be surprised to learn how easily stampeded your society is.

Take a look at H.R. 1981, the Protecting Children from Internet Pornographers Act of 2011. It’s got everything: porn, children, the Internet. And it’s got everything: financial services providers dragooned into law enforcement, data retention requirements heaped on Internet service providers, expanded “administrative subpoena” authority. (Administrative subpoenas are an improvisation to accommodate the massive power of the bureaucracy, and they’ve become another end-run around the Fourth Amendment. If it’s “administrative” it must be reasonable, goes the non-thinking…)

This isn’t a bill about child predation. It’s a bald-faced attack on privacy and limited government. Congress can move legislation like this, even in the era of the Tea Party movement, because child predation is a taboo subject. The inference is too strong in too many minds that opposing government inroads on privacy is somehow supporting child exploitation. Congress and its allies use taboos to cow the populace into accepting yet more government growth and yet more surveillance.

I’m not turned to mush by taboos, so the question I’m most interested in having asked at tomorrow’s hearing on the bill in the House Judiciary Committee is: “Under what theory of the Commerce Clause is this bill within the power of the federal government?”

FBI’s New Guidelines Further Loosen Constraints on Monitoring

The New York Times’s Charlie Savage reports that the FBI is preparing to release a new Domestic Investigations and Operations Guide (DIOG), further relaxing the rules governing the Bureau’s investigation of Americans who are not suspected of any wrongdoing.

This comes just three years after the last major revision of the FBI manual, which empowered agents to employ a broad range of investigative techniques in exploratory “assessments” of citizens or domestic groups, even in the absence of the allegations or evidence of wrongdoing needed to open an “investigation.” The FBI assured Congress that it would conduct intensive training, and test agents to ensure that they understood the limits of the new authority—but the Inspector General found irregularities suggestive of widespread cheating on those tests.

Agents can already do quite a bit even without opening an “assessment”: They can consult the government’s own massive (and ever-growing) databases, or search the public Internet for “open source” intelligence. If, however, they want to start digging through state and local law enforcement records, or plumb the vast quantities of information held by commercial data aggregators like LexisNexis or Acxiom, they currently do have to open an assessment. Again, that doesn’t mean they’ve got to have evidence—or even an allegation—that their target is doing anything illegal, but it does mean they’ve got to create a paper trail and identify a legitimate purpose for their inquiries. That’s not much of a limitation, to be sure, but it does provide at least some deterrent to casual misuse of those databases for personal reasons. That paper trail means an agent who might be tempted to use government resources for personal ends—to check up on an ex or a new neighbor—has good reason to think twice.

Removing that check means there will be a lot more digging around in databases without any formal record of why. Most of those searches will be legitimate, but that only makes the abuses more likely to get lost in the crowd. Indeed, a series of reports by the Inspector General’s Office finding “widespread and serious misuse” of National Security Letters noted that lax recordkeeping made it extremely difficult to accurately gauge the seriousness of the abuses or their true extent—and, of course, to hold the responsible parties accountable. Moreover, the most recent of those reports strongly suggests that agents engaged in illegal use of so-called “exigent letters” resisted the introduction of new records systems precisely because they knew (or at least suspected) their methods weren’t quite kosher.

The new rules will also permit agents to rifle through a person’s garbage when conducting an “assessment” of someone they’d like to recruit as an informant or mole. The reason, according to the Times, is that “they want the ability to use information found in a subject’s trash to put pressure on that person to assist the government in the investigation of others.” Not keen on being dragooned into FBI service? Hope you don’t have anything embarrassing in your dumpster! Under the current rules, physical surveillance squads can only be assigned to a target once, for a limited time, in the course of an assessment—that limit, too, falls by the wayside in the revised DIOG.

The Bureau characterizes the latest round of changes as “tweaks” to the most recent revisions. That probably understates the significance of some of the changes, but one reason it’s worrying to see another bundle of revisions so soon after the last overhaul is precisely that it’s awfully easy to slip a big aggregate change under the radar by breaking it up into a series of “tweaks.”

We’ve seen such a move already with respect to National Security Letters, which enable access to a wide array of sensitive financial, phone, and Internet records without a court order—as long as the information is deemed relevant to an “authorized investigation.” When Congress massively expanded the scope of these tools under the USA Patriot Act, legislators understood that to mean full investigations, which must be based on “specific facts” suggesting that a crime is being committed or that a threat to national security exists. Just two years later, the Attorney General’s guidelines were quietly changed to permit the use of NSLs during “preliminary” investigations, which need not meet that standard. Soon, more than half of the NSLs issued each year were used for such preliminary inquiries (though they aren’t available for mere “assessments”… yet).

The FBI, of course, prefers to emphasize all the restrictions that remain in place. We’ll probably have to wait a year or two to see which of those get “tweaked” away next.