Tag: surveillance

Every Time I Say “Terrorism,” the Patriot Act Gets More Awesome

Can I send Time magazine the bill for the new crack in my desk and the splinters in my forehead? Because their latest excretion on the case of Colleen “Jihad Jane” LaRose and its relation to Patriot Act surveillance powers is absolutely maddening:

The Justice Department won’t say whether provisions of the Patriot Act were used to investigate and charge Colleen LaRose. But the FBI and U.S. prosecutors who charged the 46-year-old woman from Pennsburg, Pa., on Tuesday with conspiring with terrorists and pledging to commit murder in the name of jihad could well have used the Patriot Act’s fast access to her cell-phone records, hotel bills and rental-car contracts as they tracked her movements and contacts last year. But even if the law’s provisions weren’t directly used against her, the arrest of the woman who allegedly used the moniker “Jihad Jane” is a boost for the Patriot Act, Administration officials and Capitol Hill Democrats say. That’s because revelations of her alleged plot may give credibility to calls for even greater investigative powers for the FBI and law enforcement, including Republican proposals to expand certain surveillance techniques that are currently limited to targeting foreigners.

Sadly, this is practically a genre resorted to by lazy writers whenever a domestic terror investigation is making headlines. It consists of indulging in a lot of fuzzy speculation about how the Patriot Act might have been crucial—for all we know!—to a successful investigation, even when every shred of available public evidence suggests otherwise. My favorite exemplar of this genre comes from a Fox News piece penned by journalist-impersonator Cristina Corbin after the capture of the Bronx bomb plotters last spring, with the bold headline: “Patriot Act Likely Helped Thwart NYC Terror Plot, Security Experts Say.” The actual article contains nothing to justify the headline: It quotes some lawyers saying vague positive things about the Patriot Act, then tries to explain how the law expanded surveillance powers, but mostly botches the basic facts. From what we know thanks to the work of real reporters, the initial tip and the key evidence in that case came from a human infiltrator who steered the plotters to locations that had been physically bugged—not from new Patriot tools.

Of course, it may well be that National Security Letters or other Patriot powers were invoked at some point in this investigation—the question is whether there’s any good reason to suspect they made an important difference. And that seems highly dubious. LaRose’s indictment cites the content of private communications, which would probably have been obtained using a boring old probable-cause warrant—a far higher standard than the one governing a traditional pen/trap order, which in any event would have given investigators much faster access to more comprehensive cell records. Maybe earlier on, then, when they were compiling the evidence needed to obtain those tools? But as several reports on the investigation have noted, “Jihad Jane” was being tracked online by a group of anti-jihadi amateurs some three years ago. As a member of one such group writes sarcastically at the site Jawa Report, the “super sekrit” surveillance tool they used to keep abreast of LaRose’s increasingly disturbing activities was… Google. I’m going to go out on a limb and say the FBI could’ve handled this one with pre-Patriot authority, and a fortiori with Patriot authority restrained by some common-sense civil liberties safeguards.

What’s a little more unusual is to see this segue into the kind of argument we usually hear in the wake of an intelligence failure, where the case is treated as self-evidently justifying still more intrusive surveillance powers—in this instance, the expansion of the “lone wolf” authority, currently applicable only to foreigners, which allows extraordinarily broad and secretive FISA surveillance to be conducted against people with no actual ties to a terror group or other “foreign power.” Yet as Time itself notes:

In fact, Justice Department terrorism experts are privately unimpressed by LaRose. Hers was not a particularly threatening plot, they say, and she was not using any of the more challenging counter-surveillance measures that more experienced jihadis, let alone foreign intelligence agents, use.

Which, of course, is a big part of the reason we have a separate system for dealing with agents of foreign powers: They are typically trained in counterintelligence tradecraft, with access to resources and networks far beyond those of ordinary nuts. What possible support can LaRose’s case provide for the proposition that these industrial-strength tools should now be turned on American citizens? They caught her—and without much trouble, by the looks of it. Sure, this domestic nut may have invoked Islamist ideology rather than the commands of Sam the Dog or anti-Semitic conspiracy theories… but so what? She’s still one more moderately dangerous unhinged American in a country that has its fair share, and has been dealing with them pretty well under the auspices of Title III for a good while now.

Patriot Act Update

It looks as though we’ll be getting a straight one-year reauthorization of the expiring provisions of the Patriot Act, without even the minimal added safeguards for privacy and civil liberties that had been proposed in the Senate’s watered-down bill. This is disappointing, but it was also eminently predictable: Between health care and the economy, it was clear Congress wasn’t going to make time for any real debate on substantive reform of surveillance law. Still, the fact that the reauthorization is only for one year suggests that the reformers plan to give it another go—though, in all probability, we won’t see any action on this until after the midterm elections.

The silver lining here is that this creates a bit of breathing room, and means legislators may now have a chance to take account of the absolutely damning Inspector General’s report that found the FBI repeatedly and systematically broke the law by exceeding its authorization to gather information about people’s telecommunications activities. It also means the debate need not be contaminated by the panic over the Fort Hood shootings or the failed Christmas bombing—neither of which has anything whatever to do with the specific provisions at issue here, but both of which would doubtless have been invoked ad nauseam anyway.

Big Teacher Is Watching

Researching government invasions of privacy all day, I come across my fair share of incredibly creepy stories, but this one may just take the cake.  A lawsuit alleges that the Lower Merion School District in suburban Pennsylvania used laptops issued to each student to spy on the kids at home by remotely and surreptitiously activating the webcam built into the bezel of each one. The horrified parents of one student apparently learned about this capability when their son was called in to the assistant principal’s office and accused of “inappropriate behavior while at home.” The evidence? A still photograph taken by the laptop camera in the student’s home.

I’ll admit, at first I was somewhat skeptical—if only because this kind of spying is in such flagrant violation of so many statutes that I thought surely one of the dozens of people involved in setting it up would have piped up and said: “You know, we could all go to jail for this.” But then one of the commenters over at Boing Boing reminded me that I’d seen something like this before, in a clip from a Frontline documentary about the use of technology in one Bronx school. Scroll ahead to 4:37 and you’ll see a school administrator explain how he can monitor what the kids are up to on their laptops in class. When he sees students using the built-in Photo Booth software to check their hair instead of paying attention, he remotely triggers it to snap a picture, then laughs as the kids realize they’re under observation and scurry back to approved activities.

I’ll admit, when I first saw that documentary—it aired this past summer—that scene didn’t especially jump out at me. The kids were, after all, in class, where we expect them to be under the teacher’s watchful eye most of the time anyway. The now obvious question, of course, is: What prevents someone from activating precisely the same monitoring software when the kids take the laptops home, provided they’re still connected to the Internet?  Still more chilling: What use is being made of these capabilities by administrators who know better than to disclose their extracurricular surveillance to the students?  Are we confident that none of these schools employ anyone who might succumb to the temptation to check in on teenagers getting out of the shower in the morning? How would we ever know?

I dwell on this because it’s a powerful illustration of a more general point that can’t be made often enough about surveillance: Architecture is everything. The monitoring software on these laptops was installed with an arguably legitimate educational purpose, but once the architecture of surveillance is in place, abuse becomes practically inevitable. Imagine that, instead of being allowed to install a bug in someone’s home after obtaining a warrant, the government placed bugs in all homes—promising to activate them only pursuant to a judicial order. Even if we assume the promise were always kept and the system were unhackable—both wildly implausible suppositions—the amount of surveillance would surely spike, because the ease of resorting to it would be so much greater even if the formal legal prerequisites remained the same. And, of course, the mere existence of the mics would have the psychological effect of making surveillance seem like the default.

You can see this effect in law enforcement demands for data retention laws, which would require Internet Service Providers to retain, at a minimum, customer transactional logs for a period of years. In face-to-face interactions, of course, our default assumption is that no record at all exists of the great majority of our conversations. Law enforcement accepts this as a fact of nature. But with digital communication, the default is that just about every activity creates a record of some sort, and so police come to see it as outrageous that a potentially useful piece of evidence might be deleted.

Unfortunately, we tend to discuss surveillance in myopically narrow terms. Should the government be able to listen in on the phone conversations of known terrorists? To pose the question is to answer it. But what kind of technological architecture is required to reliably sweep up all the communications an intelligence agency might want—for perfectly legitimate reasons—and what kind of institutional incentives and inertia does that architecture create? That is a far more complicated question—and one likely to seem too abstract to bother about for legislators focused on the threat of the week.

Surveillance, Security, and the Google Breach

Yesterday’s bombshell announcement that Google is prepared to pull out of China rather than continue to cooperate with government Web censorship was precipitated by a series of attacks on Google servers seeking information about the accounts of Chinese dissidents. One thing that leaped out at me from the announcement was the claim that the breach “was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” That piqued my interest because it’s precisely the kind of information that law enforcement is able to obtain via court order, and I was hard-pressed to think of other reasons Google would have segregated access to user account and header information. And as Macworld reports, that’s precisely where the attackers got in:

That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

This is hardly the first time telecom surveillance architecture designed for law enforcement use has been exploited by hackers. In 2005, it was discovered that Greece’s largest cellular network had been compromised by an outside adversary. Software intended to facilitate legal wiretaps had been switched on and hijacked by an unknown attacker, who used it to spy on the conversations of over 100 Greek VIPs, including the prime minister.

As an eminent group of security experts argued in 2008, the trend toward building surveillance capability into telecommunications architecture amounts to a breach-by-design, and a serious security risk. As the volume of requests from law enforcement at all levels grows, so do the compliance burdens on telecoms—making it increasingly tempting to create automated portals that permit access to user information with minimal human intervention.

The problem of volume is front and center in a leaked recording released last month, in which Sprint’s head of legal compliance, Paul Taylor, revealed that their automated system had processed 8 million requests for GPS location data in the span of a year, noting that it would have been impossible to manually serve that level of law enforcement traffic. Less remarked on, though, was Taylor’s speculation that someone who downloaded a phony warrant form and submitted it to a random telecom would have a good chance of getting a response—and one assumes he’d know if anyone would.

The irony here is that, while we’re accustomed to talking about the tension between privacy and security—to the point where it sometimes seems like people think greater invasion of privacy ipso facto yields greater security—one of the most serious and least discussed problems with built-in surveillance is the security risk it creates.

Colbert Report on PATRIOT & Private Spying

Stephen Colbert tackles both Obama’s flip-flop on the PATRIOT Act (“When presidents take office they learn a secret… Unlimited power is awesome!”) and the private sector’s complicity in the growth of the surveillance state—drawing heavily on the invaluable work of Chris Soghoian.

[Embedded video: The Colbert Report, “The Word - Spyvate Sector”]

The Virtual Fourth Amendment

I’ve just gotten around to reading Orin Kerr’s fine paper “Applying the Fourth Amendment to the Internet: A General Approach.”  Like most everything he writes on the topic of technology and privacy, it is thoughtful and worth reading.  Here, from the abstract, are the main conclusions:

First, the traditional physical distinction between inside and outside should be replaced with the online distinction between content and non-content information. Second, courts should require a search warrant that is particularized to individuals rather than Internet accounts to collect the contents of protected Internet communications. These two principles point the way to a technology-neutral translation of the Fourth Amendment from physical space to cyberspace.

I’ll let folks read the full arguments for these conclusions in Orin’s own words, but I want to suggest a clarification and a tentative objection. The clarification is that, while I think the right level of particularity is, broadly speaking, the person rather than the account, search warrants should have to specify in advance either the accounts covered (a list of e-mail addresses) or the method of determining which accounts are covered (“such accounts as the ISP identifies as belonging to the target,” for instance). Since there’s often substantial uncertainty about who is actually behind a particular online identity, the discretion of the investigator in making that link should be constrained to the maximum practicable extent.

The objection is that there’s an important ambiguity in the physical-space “inside/outside” distinction, and how one interprets it matters a great deal for what the online content/non-content distinction amounts to. The crux of it is this: Several cases suggest that surveillance conducted “outside” a protected space can nevertheless be surveillance of the “inside” of that space. The granddaddy in this line is, of course, Katz v. United States, which held that wiretaps and listening devices may constitute a “search” though they do not involve physical intrusion on private property. Kerr can accommodate this by noting that while this is surveillance from “outside” in physical space, it captures the “inside” of communication contents. But a greater difficulty is presented by another important case, Kyllo v. United States, with which Kerr deals rather too cursorily.

In Kyllo, the majority—led, perhaps surprisingly, by Justice Scalia!—found that the warrantless use of a thermal imaging scanner to detect marijuana grow lights in a private residence violated the Fourth Amendment. As Kerr observes, the crux of the disagreement between the majority and the dissent had to do with whether the scanner should be considered to be gathering private information about the interior of the house, or whether it only gathered information (about the relative warmth of certain areas of the house) that might have been obtained by ordinary observation from the exterior. No great theoretical problem, says Kerr: That only shows that the inside/outside line will sometimes be difficult to draw in novel circumstances. Online, for instance, we may be unsure whether to regard the URL of a specific Web page as mere “addressing” information or as “content”—first, because it typically makes it trivial to learn the content of what a user has read, and second, because URLs often contain search terms manually entered by users. A similar issue arose with e-mail subject lines, which now seem by general consensus to be regarded as “content” even though they are transmitted in the “header” of an e-mail.

Focusing on this familiar (if thorny) line-drawing problem, however, misses what is important about the Kyllo case, and the larger problem it presents for Kerr’s dichotomy: Both the majority and the dissent seemed to agree that a more sophisticated scanner capable of detecting, say, the movements of persons within the house would have constituted a Fourth Amendment search. But reflect, for a moment, on what this means given the way thermal imaging scanners operate. Infrared radiation emitted by objects within the house unambiguously ends up “outside” the house: A person standing on the public street cannot help but absorb some of it. What all the justices appeared to agree on, then, is that the collection and processing of information that is unambiguously outside the house, and is conducted entirely outside the house, may nevertheless amount to a search because it is surveillance of, and yields information about, the inside of the house. This means there is a distinction between the space where information is acquired and the space about which it is acquired.

This matters for Kerr’s proposed content/non-content distinction because, in very much the same way, sophisticated measurement and analysis of non-content information may well yield information about content. A few examples may help to make this clear. Secure Shell (SSH) is an encrypted protocol for secure communications. In its interactive mode, SSH transmits each keystroke as a distinct packet—and this packet transmission information is non-content information of the sort that might be obtained, say, using a so-called pen/trap order, issued on a showing of mere “relevance” to an investigation rather than the “probable cause” required for a full Fourth Amendment search (the standard Kerr agrees should govern communications content). Yet there are strong and regular patterns in the way human beings type different words on a standard keyboard, such that the content of what is typed—under SSH or any real-time chat protocol that transmits each keystroke as a packet—may be deducible from the non-content packet transmission data, given sufficiently advanced analytic algorithms. The analogy to the measurement and analysis of infrared radiation in Kyllo is, I think, quite strong.
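To make the keystroke-timing point concrete, here is a minimal sketch in Python, under loudly invented assumptions: the packet timestamps, the candidate words, and the digraph timing model are all made up for illustration, and the published research on SSH timing analysis uses far more sophisticated statistical machinery. The only point is that the inference operates entirely on packet arrival times, never on payload.

```python
# Illustrative sketch only: the timestamps, candidate words, and digraph
# timing model below are invented. Interactive SSH sends one packet per
# keystroke, so the only input here is packet arrival times (non-content
# metadata); no payload is ever inspected.

# Hypothetical arrival times (in seconds) of consecutive keystroke packets.
packet_times = [0.000, 0.205, 0.395, 0.645, 0.860]

# Inter-keystroke intervals, derived purely from packet metadata.
intervals = [b - a for a, b in zip(packet_times, packet_times[1:])]

# Toy model of the average latency (seconds) between particular key pairs.
# A real analysis would fit such a model from keystroke-timing data.
digraph_model = {
    ("p", "a"): 0.20, ("a", "s"): 0.19, ("s", "s"): 0.25, ("s", "w"): 0.21,
    ("l", "o"): 0.18, ("o", "g"): 0.22, ("g", "i"): 0.20, ("i", "n"): 0.17,
}
DEFAULT_LATENCY = 0.21  # assumed latency for key pairs not in the toy model


def timing_mismatch(word: str, observed: list[float]) -> float:
    """Total mismatch (lower is better) between a candidate word's expected
    digraph latencies and the observed inter-packet intervals."""
    pairs = list(zip(word, word[1:]))
    if len(pairs) != len(observed):
        return float("inf")  # wrong length: can't be this word
    return sum(
        abs(digraph_model.get(pair, DEFAULT_LATENCY) - gap)
        for pair, gap in zip(pairs, observed)
    )


candidates = ["passw", "login", "admin"]
best = min(candidates, key=lambda w: timing_mismatch(w, intervals))
print("observed intervals:", [round(i, 3) for i in intervals])
print("best-matching candidate:", best)
```

Everything this toy script consumes would be available under a pen/trap standard; what it produces is, for practical purposes, content.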

It is not hard to come up with a plethora of similar examples. By federal statute, records of the movies a person rents enjoy substantial privacy protection, and the standard for law enforcement to obtain them—a probable-cause showing of “relevance” and prior notice to the consumer—is higher than what a mere pen/trap requires. Yet precise analysis of the size of a file transmitted from a service like Netflix or iTunes could easily reveal either the specific movie or program downloaded, or at least narrow it down to a reasonably small field of possibilities. Logs of the content-sensitive advertising served by a service like Gmail to a particular user may reveal general information about the contents of that user’s e-mails. Sophisticated social network analysis based on the calling or e-mailing patterns of multiple users may reveal, not specific communications contents, but information about the membership and internal structure of various groups and organizations. That amounts to revealing the “contents” of group membership lists, which could have profound First Amendment implications in light of a string of Supreme Court precedents making clear that state-compelled disclosure of such lists may impermissibly burden the freedom of expressive association even when it does not run afoul of Fourth Amendment privacy protections. And running back to Kyllo, especially as “smart” appliances and networked computing become ubiquitous, analysis of non-content network traffic may reveal enormous amounts of information about the movements and activities of people within private homes.
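As a toy illustration of the file-size point, here is a sketch in Python; the catalog of title sizes and the matching tolerance are hypothetical, and real streaming traffic is chunked and padded in ways this ignores. It shows only how a single non-content observation, the total bytes delivered, can shrink the universe of possible “contents” to a handful of candidates.

```python
# Illustrative sketch only: the catalog and tolerance are invented. The
# point is that one non-content observation (total bytes transferred) can
# act as a fingerprint for content.

# Hypothetical catalog mapping titles to known encrypted-download sizes (bytes).
catalog = {
    "Title A": 1_482_003_911,
    "Title B": 1_481_950_204,
    "Title C": 2_104_338_576,
    "Title D": 3_950_112_089,
}


def candidate_titles(observed_bytes: int, tolerance: float = 0.001) -> list[str]:
    """Return every catalog title whose known size falls within the given
    relative tolerance of the observed transfer size."""
    return [
        title
        for title, size in catalog.items()
        if abs(size - observed_bytes) <= tolerance * size
    ]


# An observer on the wire sees only that roughly 1.48 GB moved to the subscriber.
observed = 1_481_990_000
print(candidate_titles(observed))  # ['Title A', 'Title B']
```

The residual ambiguity between the two near-identical sizes is exactly the “reasonably small field of possibilities” mentioned above; a richer catalog or repeated observations would narrow it further.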

Here’s one way to describe the problem: The combination of digital technology and increasingly sophisticated analytic methods has complicated the intuitive link between what is directly observed or acquired and what is ultimately subject to surveillance in a broader sense. The natural move here is to try to draw a distinction between what is directly “acquired” and what is learned by mere “inference” from the information acquired. I doubt such a distinction will hold up. It takes a lot of sophisticated processing to turn ambient infrared radiation into an image of the interior of a home; the majority in Kyllo was not sympathetic to the argument that this was mere “inference.” Strictly speaking, after all, the data pulled off an Internet connection is nothing but a string of ones and zeroes. It is only a certain kind of processing that renders it as the text of an e-mail or an IM transcript. If a different sort of processing can derive the same transcript—or at least a fair chunk of it—from the string of ones and zeroes representing packet transmission timing, should we presume there’s a deep constitutional difference?

That is not to say there’s anything wrong with Kerr’s underlying intuition. But it does, I think, suggest that new technologies will increasingly demand that privacy analysis look not merely at what is acquired but at what is done with it. In a way, the law’s hyperfocus on the moment of acquisition as the unique locus of Fourth Amendment blessing or damnation is the shadow of the myopically property-centric jurisprudence the Court finally found to be inadequate in Katz. As Kerr intimates in his paper, shaking off the digital echoes of that legacy—with its convenient bright lines—is apt to make things fiendishly complex, at least in the initial stages. But I doubt it can be avoided much longer.