Tag: censorship

Remember, the FCC Is Our National Censor

Amid charge and countercharge about who is shilling for whom in the debate over Internet regulation, Peter Suderman has the right focus in a short piece on Reason’s Hit & Run blog. The Federal Communications Commission’s Chairman claims that he only wants to regulate the Internet’s infrastructure, but one of his colleagues, Commissioner Michael Copps, pointedly declines to deny that he wants to censor the Internet.

There may be exceptions, but it’s usually pretty safe to assume that anytime a politician or bureaucrat dodges a question while calling for “a national discussion about” the proposal at hand, what he or she really means is, “I want to indicate that I support this idea without actually going on record as supporting it.”

The FCC does censorship. It’s unfortunate to see willful disregard of this by the folks wanting to install the FCC as the Internet’s regulator.

Socialists Shouldn’t Have to Admit Libertarians Into Their Club

Hastings College of the Law, a public law school in California, has a policy prohibiting discrimination on the basis of “race, color, religion, national origin, ancestry, disabilities, age, sex or sexual orientation.” In 2004, the Christian Legal Society, a religious student organization at the school, applied to become a “recognized student organization” – a designation that would have allowed CLS to receive a variety of benefits afforded to about 60 other Hastings groups. While all are welcome to attend CLS meetings, CLS’s charter requires that its officers and voting members abide by key tenets of the Christian faith and comport themselves in ways consistent with its fundamental mission, which includes a prohibition on “unrepentant” sexual conduct outside of marriage between one man and one woman.

Hastings denied CLS registration on the asserted ground that this charter conflicts with the school’s nondiscrimination policy. CLS sued Hastings, asking for no different treatment than is given to any registered student group. The district court granted Hastings summary judgment and the Ninth Circuit affirmed. The Supreme Court granted certiorari to determine whether Hastings’s refusal to grant CLS access to student organization benefits amounted to viewpoint discrimination, which is impermissible under the First Amendment.

Yesterday Cato filed an amicus brief supporting CLS – authored by preeminent legal scholar Richard Epstein – in which we argue that CLS’s right to intimate and expressive association trumps any purported state interest in enforcing a school nondiscrimination policy. While Hastings may impose reasonable restrictions on access to limited public forums, it should not be allowed to admit speakers with one point of view while excluding speakers who hold different views. Our brief also rebuts Hastings’s assertion that its ability to exclude the public at large from school premises renders its content-based speech restrictions constitutional.

We urge the Court to safeguard public university students’ right to form groups – which by definition exclude people – free from government interference or censorship.  (Of course, our first choice would be for the government to get out of the university business and our second choice would be to stop forcing taxpayers to pay for student clubs, but given those two realities – as in the case at hand – freedom of association is the way to go.)

Surveillance, Security, and the Google Breach

Yesterday’s bombshell announcement that Google is prepared to pull out of China rather than continuing to cooperate with government Web censorship was precipitated by a series of attacks on Google servers seeking information about the accounts of Chinese dissidents.  One thing that leaped out at me from the announcement was the claim that the breach “was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” That piqued my interest because it’s precisely the kind of information that law enforcement is able to obtain via court order, and I was hard-pressed to think of other reasons they’d have segregated access to user account and header information.  And as Macworld reports, that’s precisely where the attackers got in:

That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

This is hardly the first time telecom surveillance architecture designed for law enforcement use has been exploited by hackers. In 2005, it was discovered that Greece’s largest cellular network had been compromised by an outside adversary. Software intended to facilitate legal wiretaps had been switched on and hijacked by an unknown attacker, who used it to spy on the conversations of over 100 Greek VIPs, including the prime minister.

As an eminent group of security experts argued in 2008, the trend toward building surveillance capability into telecommunications architecture amounts to a breach by design, and a serious security risk. As the volume of requests from law enforcement at all levels grows, so does the compliance burden on telecoms—making it increasingly tempting to create automated portals that permit access to user information with minimal human intervention.

The problem of volume is front and center in a leaked recording released last month, in which Sprint’s head of legal compliance revealed that the company’s automated system had processed 8 million requests for GPS location data in the span of a year, noting that it would have been impossible to serve that level of law enforcement traffic manually. Less remarked on, though, was his speculation that someone who downloaded a phony warrant form and submitted it to a random telecom would have a good chance of getting a response—and one assumes he’d know if anyone would.
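To make the worry concrete, here is a minimal sketch in Python of the failure mode an automated compliance portal invites. Everything in it is hypothetical illustration (the subscriber table, the field names, and the request format are all invented, not any real carrier’s system): the portal checks that a request is well-formed, but nothing checks that it is genuine.

```python
# Hypothetical sketch of an automated law-enforcement compliance portal.
# It validates the *format* of a legal demand, but nothing below proves
# the demand is *authentic* -- that gap, not a coding bug, is the hole.

SUBSCRIBERS = {  # invented sample data
    "555-0100": {"name": "A. Example", "gps_log": ["2009-11-03 14:02 47.61,-122.33"]},
}

REQUIRED_FIELDS = {"agency", "case_number", "target", "legal_authority"}

def handle_request(request: dict) -> dict:
    """Return subscriber records for any facially valid demand."""
    missing = REQUIRED_FIELDS - request.keys()
    if missing:
        raise ValueError(f"malformed request, missing: {sorted(missing)}")
    # MISSING: verification against the issuing court (a signature
    # check, a callback, a human review step). A downloaded template,
    # plausibly filled in, sails through.
    return SUBSCRIBERS.get(request["target"], {})

# A forged but well-formed request succeeds:
print(handle_request({
    "agency": "Anytown PD",
    "case_number": "12-345",
    "target": "555-0100",
    "legal_authority": "warrant_form.pdf",
}))
```

Once disclosure is automated this way, whoever can produce a facially valid request, whether a court, a forger with a template, or an intruder who reaches the portal, gets the data.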

The irony here is that, while we’re accustomed to talking about the tension between privacy and security—to the point where it sometimes seems like people think greater invasion of privacy ipso facto yields greater security—one of the most serious and least discussed problems with built-in surveillance is the security risk it creates.

Mistaken Moral Equivalency

Former Google executive turned Obama administration deputy chief technology officer Andrew McLaughlin made some unfortunate comments at a law school technology conference last week equating private network management to government censorship as it is practiced in China.

By many accounts, President Obama’s visit to China was unimpressive. It apparently included a press conference at which no questions were allowed, as well as government censorship of the president’s anti-censorship comments. On its heels, McLaughlin equated Chinese government censorship with network management by U.S. Internet service providers.

“If it bothers you that the China government does it, it should bother you when your cable company does it,” McLaughlin said. That line is wrong on at least two counts.

First, your cable company doesn’t do it. There have been two cases in which ISPs interfered with traffic in ways that are generally regarded as wrongful. In the first, Comcast slowed down BitTorrent file-sharing traffic in some places for a period of time, did a poor job of disclosing it, and relented when the practice came to light. (People who don’t know the facts will argue that the FCC stepped in, but market pressures had solved the problem before the FCC did anything.) The second was a 2005 case in which a North Carolina phone company/ISP called Madison River Communications allegedly blocked Vonage VoIP traffic.

In neither of these anecdotes did the ISP degrade Internet traffic because of its content—because of the information any person was trying to communicate to another. Comcast was trying to make sure that its customers could get access to the Internet despite some bandwidth hogs on its network. Madison River was apparently trying to keep people using its telephone lines rather than making Internet phone calls. That’s a market no-no, but not censorship.

Second, even if ISPs were censoring traffic, Chinese government censorship and corporate censorship would have no moral equivalency. In a free country, the manager of a private network can say to customers, “You may not transmit certain messages over our network.” People who don’t like that contract term can go to other networks, and they surely would. (Tim Lee’s paper, The Durable Internet: Preserving Network Neutrality Without Regulation, shows that ownership of networks and platforms does not equate to control of their content.)

When the government of China forces networks and platforms to remove content that it doesn’t like, that demand comes ultimately from the end of a gun. Governments like China’s imprison and kill their people for expressing disfavored views and for organizing to live freer lives. This has no relationship to cable companies’ network management practices, even when these ISPs deviate from consumer demand.

McLaughlin is a professional colleague who has my esteem. I defended Google’s involvement in the Chinese market during his tenure there. But if he lacks grounding in the fundamentals of freedom—thinking that private U.S. ISPs and the Chinese government are part of some undifferentiated mass of authority—I relish the chance to differ with him.

Some Thoughts on the New Surveillance

Last night I spoke at “The Little Idea,” a mini-lecture series launched in New York by Ari Melber of The Nation and now starting up here in D.C., on the incredibly civilized premise that, instead of some interminable panel that culminates in a series of audience monologues-disguised-as-questions, it’s much more appealing to have a speaker give a ten-minute spiel, sort of as a prompt for discussion, and then chat with the crowd over drinks.

I’d sketched out a rather longer version of my remarks in advance just to make sure I had my main ideas clear, and so I’ll post them here, as a sort of preview of a rather longer and more formal paper on 21st century surveillance and privacy that I’m working on. Since ten-minute talks don’t accommodate footnotes very well, I should note that I’m drawing for a lot of these ideas on the excellent work of legal scholars Lawrence Lessig and Daniel Solove (relevant papers at the links). Anyway, the expanded version of my talk after the jump:

Since this is supposed to be an event where the drinking is at least as important as the talking, I want to begin with a story about booze—the story of a guy named Roy Olmstead.  Back in the days of Prohibition, Roy Olmstead was the youngest lieutenant on the Seattle police force. He spent a lot of his time busting liquor bootleggers, and in the course of his duties, he had two epiphanies. First, the local rum runners were disorganized—they needed a smart kingpin who’d run the operation like a business. Second, and more importantly, he realized liquor smuggling paid a lot better than police work.

So Roy Olmstead decided to change careers, and it turned out he was a natural. Within a few years he had remarried, this time to a British debutante, bought a big white mansion, and even ran his own radio station—which he used to signal his ships, smuggling hooch down from Canada, via coded messages hidden in broadcasts of children’s bedtime stories. He did retain enough of his old ethos, though, to forbid his men from carrying guns. The local press called him the Bootleg King of Puget Sound, and his parties were the hottest ticket in town.

Roy’s success did not go unnoticed, of course, and soon enough the feds were after him using their own clever high-tech method: wiretapping. It was so new that they didn’t think they needed to get a court warrant to listen in on phone conversations, and so when the hammer came down, Roy Olmstead challenged those wiretaps in a case that went all the way to the Supreme Court—Olmstead v. U.S.

The court had to decide whether these warrantless wiretaps had violated the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” But when the court looked at how a “search” had traditionally been defined, they saw that it was tied to the common law tort of trespass. Originally, that was supposed to be your remedy if you thought your rights had been violated, and a warrant was a kind of shield against a trespass lawsuit. So the majority didn’t see any problem: “There was no search,” they wrote, “there was no seizure.” Because a search was when the cops came on to your property, and a seizure was when they took your stuff. This was no more a search than if the police had walked by on the sidewalk and seen Roy unpacking a crate of whiskey through his living room window: It was just another kind of non-invasive observation.

So Olmstead went to jail, and came out a dedicated evangelist for Christian Science. It wasn’t until the year after Olmstead died, in 1967, that the Court finally changed its mind in a case called Katz v. U.S.: No, they said, the Fourth Amendment protects people and not places, and so instead of looking at property we’re going to look at your reasonable expectation of privacy, and on that understanding, wiretaps are a problem after all.

So that’s a little history lesson—great, so what? Well, we’re having our own debate about surveillance as Congress considers not just reauthorization of some expiring Patriot Act powers, but also reform of the larger post-9/11 surveillance state, including last year’s incredibly broad amendments to the Foreign Intelligence Surveillance Act. And I see legislators and pundits repeating two related types of mistakes—and these are really conceptual mistakes, not legal mistakes—that we can now, with the benefit of hindsight, more easily recognize in the logic of Olmstead: One is a mistake about technology; the other is a mistake about the value of privacy.

First, the technology mistake. The property rule they used in Olmstead was founded on an assumption about the technological constraints on observation. The goal of the Fourth Amendment was to preserve a certain kind of balance between individual autonomy and state power. The mechanism for achieving that goal was a rule that established a particular trigger or tripwire that would, in a sense, activate the courts when that boundary was crossed in order to maintain the balance. Establishing trespass as the trigger made sense when the sphere of intimate communication was coextensive with the boundaries of your private property. But when technology decoupled those two things, keeping the rule the same no longer preserved the balance, the underlying goal, in the same way, because suddenly you could gather information that once required trespass without hitting that property tripwire.

The second and less obvious error has to do with a conception of the value of privacy, and a corresponding idea of what a privacy harm looks like.  You could call the Olmstead court’s theory “Privacy as Seclusion,” where the paradigmatic violation is the jackboot busting down your door and disturbing the peace of your home. Wiretapping didn’t look like that, and so in one sense it was less intrusive—invisible, even. In another sense, it was more intrusive because it was invisible: Police could listen to your private conversations for months at a time, with you none the wiser. The Katz court finally understood this; you could call their theory Privacy as Secrecy, where the harm is not intrusion but disclosure.

But there’s an even less obvious potential harm here. If the police didn’t need a warrant, everyone who made a phone call would know that they could be listened to whenever the authorities felt like it. Wiretapping is expensive and labor-intensive enough that, realistically, the police can only be gathering information about a few people at a time. But if further technological change were to remove that constraint, then the knowledge of the permanent possibility of surveillance starts having subtle effects on people’s behavior—if you’ve seen the movie The Lives of Others you can see an extreme case of an ecology of constant suspicion—and that persists whether or not you’re actually under surveillance. To put it in terms familiar to Washingtonians: Imagine if your conversations had to be “on the record” all the time. Borrowing from Michel Foucault, we can say the privacy harm here is not (primarily) invasion or disclosure but discipline. This idea is even embedded in our language: When we say we want to control and discipline these police powers, we talk about the need for over-sight and super-vision, which are etymologically basically the same word as sur-veillance.

Move one more level from the individual and concrete to the abstract and social harms, and you’ve got the problem (or at least the mixed blessing) of what I’ll call legibility. The idea here is that the longer term possibilities of state control—the kinds of power that are even conceivable—are determined in the modern world by the kind and quantity of information the modern state has, not about discrete individuals, but about populations.  So again, to reach back a few decades, the idea that maybe it would be convenient to round up all the Americans of Japanese ancestry—or some other group—and put them in internment camps is just not even on the conceptual menu unless you have a preexisting informational capacity to rapidly filter and locate your population that way.

Now, when we talk about our First Amendment right to free speech, we understand it has a certain dual character: there’s an individual right, grounded in the equal dignity of free citizens, that’s violated whenever I’m prohibited from expressing my views, but also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to say, or even listen to, controversial speech. Looking at the incredible scope of documented intelligence abuses from the ’60s and ’70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

Now consider how we think about our own contemporary innovations in search technology. The marketing copy claims PATRIOT and its offspring “update” investigative powers for the information age—but what we’re trying to do is stretch our traditional rules and oversight mechanisms to accommodate search tools as radically novel now as wiretapping was in the 1920s. On the traditional model, you want information about a target’s communications and conduct, so you ask a judge to approve a method of surveillance, using standards that depend on how intrusive the method is and how secret and sensitive the information is. Constrained by legal rulings from a very different technological environment, this model assumes that information held by third parties—like your phone or banking or credit card records—gets very little protection, since it’s not really “secret” anymore. And the sensitivity of all that information is evaluated in isolation, not in terms of the story that might emerge from linking together all the traces we now inevitably leave in the datasphere every day.

The new surveillance typically seeks to observe information about conduct and communications in order to identify targets. That may mean using voiceprint analysis to pull matches for a particular target’s voice, or for a sufficiently unusual regional dialect, in a certain area. It may mean content analysis to flag e-mails or voice conversations containing known terrorist code phrases. It may mean social graph analysis to reidentify targets who have changed venues by their calling patterns. If you and a bunch of your Facebook friends all decide to use fake names when you sign up for Twitter, I can still reidentify you, given sufficient computing power and strong algorithms, by mapping the shape of the connections between you—a kind of social fingerprinting. It can involve predictive analysis based on powerful electronic “classifiers” that extract subtle patterns of travel or communication or purchases common to past terrorists, in effect writing their own algorithms for detecting potential ones.
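To show how little the fake names buy you, here is a toy sketch in Python of the social-fingerprinting intuition. The five-person network is invented, and real analysts would use far stronger methods than this single round of degree signatures:

```python
# Toy re-identification by graph structure alone: the same invented
# five-person friendship network appears twice, once with real names
# and once with pseudonyms. No profile data is shared, only who links
# to whom, yet structure alone betrays most of the mapping.
from collections import Counter

def signature(graph, node):
    """Structural fingerprint: own degree plus sorted neighbor degrees."""
    return (len(graph[node]),
            tuple(sorted(len(graph[n]) for n in graph[node])))

def unique_signatures(graph):
    """Map each fingerprint occurring exactly once to its node."""
    sigs = {node: signature(graph, node) for node in graph}
    counts = Counter(sigs.values())
    return {s: node for node, s in sigs.items() if counts[s] == 1}

named = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"alice", "carol", "eve"},
    "eve":   {"dave"},
}
pseudonymous = {  # the same graph under fake handles
    "@x1": {"@x2", "@x3", "@x4"},
    "@x2": {"@x1", "@x3"},
    "@x3": {"@x1", "@x2", "@x4"},
    "@x4": {"@x1", "@x3", "@x5"},
    "@x5": {"@x4"},
}

known = unique_signatures(named)
for sig, handle in unique_signatures(pseudonymous).items():
    if sig in known:
        print(f"{handle} is probably {known[sig]}")
```

Three of the five pseudonyms fall out immediately; the remaining two sit in structurally symmetric positions and stay ambiguous only until you add more signal (neighbors of neighbors, timing, shared contacts), which is exactly what stronger algorithms do.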

Bracket for the moment whether we think some or all of these methods are wise.  It should be crystal clear that a method of oversight designed for up front review and authorization of target-based surveillance is going to be totally inadequate as a safeguard for these new methods.  It will either forbid them completely or be absent from the parts of the process where the dangers to privacy exist. In practice what we’ve done is shift the burden of privacy protection to so-called “minimization” procedures that are meant to archive or at least anonymize data about innocent people. But those procedures have themselves been rendered obsolete by technologies of retrieval and reidentification: No sufficiently large data set is truly anonymous.
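That last claim is easy to demonstrate in miniature. The sketch below, using invented records, links a “minimized” data set back to names by joining on a few innocuous attributes, in the style of Latanya Sweeney’s well-known ZIP-code, birth-date, and sex re-identification studies:

```python
# Toy illustration of why stripping names does not anonymize data:
# join "minimized" records to a public roster on quasi-identifiers.
# All records here are invented.

anonymized_calls = [  # names removed during "minimization"
    {"zip": "20002", "dob": "1971-03-05", "sex": "F", "called": "+1-202-555-0117"},
    {"zip": "20009", "dob": "1984-11-21", "sex": "M", "called": "+1-202-555-0139"},
]

voter_roll = [  # public and fully identified
    {"name": "Jane Roe", "zip": "20002", "dob": "1971-03-05", "sex": "F"},
    {"name": "John Doe", "zip": "20009", "dob": "1984-11-21", "sex": "M"},
    {"name": "Mary Moe", "zip": "20009", "dob": "1990-07-02", "sex": "F"},
]

QUASI_IDS = ("zip", "dob", "sex")

def link(record, roster):
    """Return roster entries sharing the record's quasi-identifiers."""
    key = tuple(record[q] for q in QUASI_IDS)
    return [p for p in roster if tuple(p[q] for q in QUASI_IDS) == key]

for rec in anonymized_calls:
    matches = link(rec, voter_roll)
    if len(matches) == 1:  # a unique match means re-identification
        print(f"{matches[0]['name']} called {rec['called']}")
```

The larger and richer the “anonymous” data set, the more attributes there are to join on, which is why scale itself defeats minimization.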

And consider the size of the data sets we’re talking about. The FBI’s Investigative Data Warehouse holds at least 1.5 billion records, and is growing fast, drawn from an array of private- and government-sector sources—some presumably obtained using National Security Letters and Patriot 215 orders, some by other means. Those NSLs are issued by the tens of thousands each year, mostly for information about Americans. As of 2006, we know “some intelligence sources”—probably NSA’s—were growing at a rate of 4 petabytes—that’s 4 million gigabytes—each month. Within about five years, NSA’s archive is expected to be measured in yottabytes—if you want to picture one yottabyte, take the sum total of all data on the Internet—every web page, audio file, and video—and multiply it by 2,000. At that point they will have to make up a new word for the next largest unit of data. As J. Edgar Hoover understood all too well, just having that information is a form of power. He wasn’t the most feared man in Washington for decades because he necessarily had something on everyone—though he had a lot—but because he had so much that you really couldn’t be sure what he had on you.
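For anyone who wants to check the unit arithmetic in that paragraph, a few lines suffice (decimal prefixes assumed):

```python
# Decimal data-unit ladder, in bytes.
GB = 10**9    # gigabyte
PB = 10**15   # petabyte
YB = 10**24   # yottabyte

print(4 * PB // GB)  # 4 petabytes = 4,000,000 gigabytes, as stated
print(YB // PB)      # one yottabyte = a billion petabytes
```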

There is, to be sure, a lot to be said against the expansion of surveillance powers over the past eight years from a more conventional civil liberties perspective. But we also need to be aware that if we’re not attuned to the way new technologies may avoid our old tripwires, if we only think of privacy in terms of certain familiar, paradigmatic violations—the boot in the door—then, like the Olmstead court, we may render ourselves blind to equally serious threats that don’t fit our mental picture of a privacy harm.

If we’re going to avoid this, we need to attune ourselves to the ways modern surveillance is qualitatively different from past search tools, even if words like “wiretap” and “subpoena” remain the same. And we’re going to need to stop thinking only in terms of isolated violations of individual rights and to consider as well the systemic and structural effects of the architectures of surveillance we’re constructing.


Cooperating against the Censors

One of the consequences of governments attempting to crack down on dissent is increasing cooperation among groups in different countries pushing for greater liberty and human rights.  For instance, some of the most important aid for Iranian protesters is coming from Chinese dissidents.

Reports Nicholas Kristof in the New York Times:

The unrest unfolding in Iran is the quintessential 21st-century conflict. On one side are government thugs firing bullets. On the other side are young protesters firing “tweets.”

The protesters’ arsenal, such as those tweets on Twitter.com, depends on the Internet or other communications channels. So the Iranian government is blocking certain Web sites and evicting foreign reporters or keeping them away from the action.

The push to remove witnesses may be the prelude to a Tehran Tiananmen. Yet a secret Internet lifeline remains, and it’s a tribute to the crazy, globalized world we live in. The lifeline was designed by Chinese computer engineers in America to evade Communist Party censorship of a repressed Chinese spiritual group, the Falun Gong.

Today, it is these Chinese supporters of Falun Gong who are the best hope for Iranians trying to reach blocked sites.

“We don’t have the heart to cut off the Iranians,” said Shiyu Zhou, a computer scientist and leader in the Chinese effort, called the Global Internet Freedom Consortium. “But if our servers overload too much, we may have to cut down the traffic.”

Unfortunately, the struggle against government repression remains a difficult one.  But the development of a global human rights community with members willing to help each other wherever they are is an extremely positive sign.