Tag: privacy

Should Legislatures, Commissions, and Such Figure Out Privacy Problems?

The recent European Commission proposal to create a radical and likely near-impossible-to-implement “right to be forgotten” provides an opportunity to do some thinking about how privacy norms should be established.

In 1961, Italian liberal philosopher and lawyer Bruno Leoni published Freedom and the Law, an excellent, if dense, rumination on law and legislation, which, as he emphasized, are quite different things.

Legislation appears today to be a quick, rational, and far-reaching remedy against every kind of evil or inconvenience, as compared with, say, judicial decisions, the settlement of disputes by private arbiters, conventions, customs, and similar kinds of spontaneous adjustments on the part of individuals. A fact that almost always goes unnoticed is that a remedy by way of legislation may be too quick to be efficacious, too unpredictably far-reaching to be wholly beneficial, and too directly connected with the contingent views and interests of a handful of people (the legislators), whoever they may be, to be, in fact, a remedy for all concerned. Even when all this is noticed, the criticism is usually directed against particular statutes rather than against legislation as such, and a new remedy is always looked for in “better” statutes instead of in something altogether different from legislation. (page 7, 1991 Liberty Fund edition)

The new Commission proposal is an example. Apparently the EU’s 1995 Data Protection Directive didn’t do the job.

Leoni emphasizes that the appropriate “common” law should be discovered in vernacular practice rather than handed down by some central authority.

“[A] legal system centered on legislation resembles … a centralized economy in which all the relevant decisions are made by a handful of directors, whose knowledge of the whole situation is fatally limited and whose respect, if any, for the people’s wishes is subject to that limitation. No solemn titles, no pompous ceremonies, no enthusiasm on the part of the applauding masses can conceal the crude fact that both the legislators and the directors of a centralized economy are only particular individuals like you and me, ignorant of 99 percent of what is going on around them as far as the real transactions, agreements, attitudes, feelings, and convictions of people are concerned.” (pages 22–23, emphasis removed)

The proposed “right to be forgotten” is a soaring flight of fancy, produced by detached intellects who lack the knowledge to devise appropriate privacy norms. If it were to move forward as is, it would cripple Europe’s information economy while hamstringing international data flows. More importantly, it would deny European consumers the benefits of a modernizing economy by giving them more privacy than they probably want.

I say “probably” because I don’t know what European consumers want. I only know how to learn what they want—and that is not by observing the dictates of the people who occupy Europe’s many government bureaucracies.

Privacy and the Common Good

Jim Harper’s post Monday, responding to communitarian Amitai Etzioni on “strip search” scanners at airports, gives me an opportunity to mount one of my hobbyhorses.

My beef with Etzioni’s conclusory argument isn’t just that, as Jim observes, he purports to “weigh” the individual right to privacy against the common good (here in the guise of “security”) without any real analysis of the magnitudes on both sides. It’s that his framing is fundamentally backwards. The importance of privacy is, to a great extent, a function of its collective dimension—a point to which you’d think a communitarian theorist who’s written an entire book on privacy would be more keenly attuned. If I may indulge in a little self-quotation:

[W]hen we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to [engage in]–or even listen to–controversial speech. Looking at the incredible scope of documented intelligence abuses from the ’60s and ’70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

I’m actually somewhat sympathetic to the notion that the individual harms that result from strip scanners are relatively slight, especially when passengers can opt for a pat down instead. In the worst-case scenario, some unscrupulous TSA employee might find a way to save and circulate some of these blurry quasi-nude images, the embarrassment potential of which is likely to be mitigated by the fact that the x-ray view doesn’t really show an identifiable face.

I’m much more concerned about the social effect of making such machines commonplace—of creating a general norm that people who wish to engage in routine travel must expect to expose themselves in this way. As Michel Foucault famously observed, surveillance is not merely the passive gathering of information; it exerts a “disciplinary” power, creating what he called “docile bodies.” The airport becomes a schoolhouse whose lesson is that not even the most intimate spaces escape the gaze of authority.

In his fine book The Naked Crowd, legal scholar Jeff Rosen recounts presenting his students and other audiences with a hypothetical choice between going through a strip scanner and a “Blob Machine”—a similar scanner programmed to filter out the passenger’s body image and project any foreign objects (as determined by density) on a generic wireframe mannequin. Though he assured them that the Blob Machine was just as accurate at detecting hidden objects, he found that in every group some significant number of people still preferred to subject themselves to the strip-scanner, in what Rosen calls “a ritualistic demonstration of their own purity and trustworthiness.” But there may be more to it than that. To expose oneself, render oneself vulnerable, is also closely linked to rituals of subordination—not just in human cultures, but in the animal kingdom. Think of the pack dog signaling his recognition of the alpha male’s (or owner’s) dominance by rolling over to expose his belly. In the context of pervasive fear of terrorism, this kind of routine exposure is a way of reassuring ourselves of the power of our protectors, quite apart from whatever immediate utility the strip-scanners have as a detection and deterrence mechanism. We ought to be a little wary of any “security” measures that seem to feed into that psychological mechanism.

While I don’t think these sorts of considerations ought to be dispositive by themselves in particular circumstances where a security measure is otherwise justifiable in more conventional cost-benefit terms, I think a communitarian commentator in particular ought to be a lot more sensitive to the cumulative cultural effect of many such measures. Formal institutions and rules are important to the preservation of free societies, but so are background norms and expectations. A society that comes to accept as normal the routine observation of our naked bodies by authority as an incident to travel is, I think, in danger of losing some important cultural capital.

The ‘Communitarian’ Defense of Strip-Search Machines

What’s most interesting about Amitai Etzioni’s defense of airport strip-search machines is how rootless his approach to privacy problems is.

[O]ur public-policy decisions must balance two core values: Our commitment to individual rights and our commitment to the common good. Neither is a priori privileged. Thus, when threatened by the lethal SARS virus, we demanded that contagious people stay home—even though this limited their freedom to assemble and travel—because the contribution to the common good was high and the intrusion limited. Yet we banned the trading of medical records because these trades constituted a severe intrusion, but had no socially redeeming merit.

I disagree with this formulation, and I don’t know that he has accurately depicted the law on “trade” in medical records or the merits on that question. But more important here: these value-balancing precedents don’t guide his analysis of strip-search machines. Rather, he just concludes in favor of them using his own assessment of “the common good.”

At least Etzioni is consistent. I wrote in my 2005 Privacilla.org review of his book, The Limits of Privacy: “[T]he book amounts to little more than bare assertion—one man’s argument—that privacy is not as important as other things. The argument appears unrooted in anything more than Etzioni’s opinions.”

We have a long tradition of protecting individual rights. And we have processes for discovering the common good, such as markets, in which individual preferences agglomerate to sort it out for us. On the rare occasions when markets fail, political legislation and regulation may be a necessary substitute for natural processes. Somewhere quite a bit further down the list falls the technique “ask Amitai Etzioni.”

Unclear on Internet Security and Surveillance

The Washington Post has a poorly thought-through editorial today on the Justice Department’s “CALEA for the Cloud” initiative. That’s the nascent proposal to require all Internet services to open back doors to their systems for court-ordered government surveillance.

“Some privacy advocates and technology experts have sounded alarms,” says the Post, “arguing that such changes would make programs more vulnerable to hackers.”

Those advocates—of privacy and security both—are right. Julian Sanchez recently described here how unknown hackers exploited surveillance software to eavesdrop on high government officials in Greece.

“Some argue that because the vast majority of users are law-abiding citizens, the government must accept the risk that a few criminals or terrorists may rely on the same secure networks.”

That view is also correct. The many benefits of giving the vast majority of law-abiding people secure communications outstrip the cost of allowing law-breakers also to have secure communications.

But the Post editorial goes on, sounding in certainty but exhibiting befuddlement.

The policy question is not difficult: The FBI should be able to quickly obtain court-approved information, particularly data related to a national security probe. Companies should work with the FBI to determine whether there are safe ways to provide access without inviting unwanted intrusions. In the end, there may not be a way to perfectly protect both interests — and the current state of technology may prove an impenetrable obstacle.

The policy question, which the Post piece begs, is actually very difficult. Would we be better off overall if most or all of the information that traverses the Internet were partially insecure so that the FBI could obtain court-approved information? What about protocols and communications that aren’t owned or controlled by the business sector—indeed, not controlled by anyone?

The Tahoe-LAFS secure online storage project, for example—an open-source project, not controlled by anyone—recently announced its intention not to compromise the security of the system by opening back doors.

The government could require the signatories to the statement to change the code they’re working on, but thousands of others would continue to work with versions of the code that are secure. As long as people are free to write their own code—and that will not change—there is no way to achieve selective government access that is also secure.
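The underlying point is easy to illustrate. In a client-side (end-to-end) encryption design like Tahoe-LAFS’s, data is encrypted before it ever leaves the user’s machine, so the storage provider holds only ciphertext and has nothing meaningful to hand over, backdoor mandate or not. Here is a minimal sketch of that principle, using an illustrative one-time pad rather than Tahoe-LAFS’s actual AES-based scheme; the function names are my own:

```python
# Client-side encryption sketch: the storage provider only ever sees
# ciphertext, so a backdoor at the provider yields nothing readable.
# (One-time pad for illustration; real systems use AES and key-derivation.)
import secrets

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a fresh random key the same length as the message.
    The key never leaves the client."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same key recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# The client encrypts before upload; the server stores only ciphertext.
key, ciphertext = encrypt(b"private notes")
server_store = {"blob-1": ciphertext}  # all the provider (or a court order) can reach
assert server_store["blob-1"] != b"private notes"
assert decrypt(key, server_store["blob-1"]) == b"private notes"
```

Because anyone can write and run code like this, a mandate binding on providers simply cannot reach the encryption itself.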

The current state of technology, thankfully, is an impenetrable obstacle to compromised security in the interest of government surveillance. The only conclusion here, which happily increases our security and liberty overall, is that everyone should have access to fully secure communications.

What Privacy Invasion Looks Like

The details of Tyler Clementi’s case are slowly revealing themselves. He was the Rutgers University freshman whose sex life was exposed on the Internet when fellow students Dharun Ravi and Molly Wei placed a webcam in his dorm room and transmitted the images it captured in real time. Shortly thereafter, Clementi committed suicide.

Whether Ravi and Wei acted out of anti-gay animus, titillation about Clementi’s sexual orientation, or simply titillation about sex, their actions were utterly outrageous, offensive, and outside of the bounds of decency. Moreover, according to Middlesex County, New Jersey prosecutors, they were illegal. Ravi and Wei have been charged with invasion of privacy.

This is what invasion of privacy looks like. It’s the outrageous, offensive, truly galling revelation of private facts like what happened in this case. Over the last 120 years, common law tort doctrine has evolved to find that people have a right not to suffer such invasions. New Jersey has apparently enshrined that right in a criminal statute.

The story illustrates how quaint are some of the privacy “invasions” we often discuss, such as the tracking of people’s web surfing by advertising networks. That information is not generally revealed in any meaningful way. It is simply being used to serve tailored ads.

This event also illustrates how privacy law is functioning in our society. It’s functioning fairly well. Law, of course, is supposed to reflect deeply held norms. Privacy norms—like the norm against exposing someone’s sexual activity without consent—are widely shared, so that the laws backing up those norms are rarely violated.

It is probably a common error to believe that law is “working” when it is exercised fairly often, with fines and penalties being doled out routinely. Holders of this view see law—more accurately, legislation—as a tool for shaping society. Many of them would like to end the societal debate about online privacy, establishing a “uniform national privacy standard.” But nobody knows what that standard should be. The more often legal actions are brought against online service providers, the stronger is the signal that online privacy norms are unsettled. That privacy debate continues, and it should.

It is not debatable that what Ravi and Wei did to Tyler Clementi was profoundly wrong. That was a privacy invasion.

The OECD Privacy Guidelines at 30

If you blinked, you missed it. Heaven knows, I did. The OECD privacy guidelines celebrated their 30th birthday on Thursday last week. They were introduced as a Recommendation by the Council of the Organisation for Economic Co-operation and Development on September 23, 1980, and were meant to harmonize global privacy regulation.

Should we fete the guidelines on their birthday, crediting how they have solved our privacy problems? Not so much. When they came out, people felt insecure about their privacy, and demand for national privacy legislation was rising, risking the creation of tensions among national privacy regimes. Today, people feel insecure about their privacy, and demand for national privacy legislation is rising, risking the creation of tensions among national privacy regimes. Which is to say, not much has been solved.

In 2002—and I’m still at this? Kill me now—I summarized the OECD Guidelines and critiqued them as follows on the “OECD Guidelines” Privacilla page.

The Guidelines, and the concept of “fair information practices” generally, fail to address privacy coherently and completely because they do not recognize a rather fundamental premise: the vast difference in rights, powers, and incentives between governments and the private sector. Governments have heavy incentives to use and sometimes misuse information. They may appropriately be controlled by “fair information practices.”

Private sector entities tend to have a balance of incentives, and they are subject to both legal and market punishments when they misuse information. Saddling them with additional, top-down regulation in the form of “fair information practices” would raise the cost of goods and services to consumers without materially improving their privacy.

Not much has changed in my thinking, though today I would be more careful to emphasize that many FIPs are good practices. It’s just that they are good in some circumstances and not in others, some FIPs are in tension with other FIPs, and so on.

The OECD Guidelines and the many versions of FIPs are a sort of privacy bible to many people. But nobody actually lives by the book, and we wouldn’t want them to. Happy birthday anyway, OECD guidelines.

Speech, Privacy, and Government Infiltration

Yesterday, I mentioned a recent report from the Justice Department’s Office of the Inspector General on some potentially improper instances of FBI monitoring of domestic anti-war groups. It occurs to me that it also provides a useful data point that’s relevant to last week’s post about the pitfalls of thinking about the proper limits of government information gathering exclusively in terms of “privacy.”

As the report details, an agent in the FBI’s Pittsburgh office sent a confidential source to report on organizing meetings for anti-war marches held by the anarchist Pittsburgh Organizing Group (POG). The agent admitted to OIG that his motive was a general desire to cultivate an informant rather than any particular factually grounded investigative purpose. Unsurprisingly, reports generated by the source contained “no information remotely relevant to actual or potential criminal activity,” and at least one report was “limited to identifying information about the participants in a political discussion together with characterizations of the contents of the speech of the participants.” The agent dutifully recorded that at one such gathering “Meeting and discussion was primarily anti anything supported by the main stream [sic] American.”

Now, in fact, the OIG suggests that the retention in FBI records of personally identifiable information about citizens’ political speech, unrelated to any legitimate investigation into suspected violations of federal law, may well have violated the Privacy Act. But if we wanted to pick semantic nits, we could surely make the argument that this is not really an invasion of “privacy” as traditionally conceived—and certainly not as conceived by our courts. The gatherings don’t appear to have been very large—the source was able to get the names and ages of all present—but they were, in principle, announced on the Web and open to the public.

Fortunately, the top lawyer at the Pittsburgh office appears to have been duly appalled when he discovered what had been done, and made sure the agents in the office received refresher training on the proper and improper uses of informants. But as a thought experiment, suppose this sort of thing were routine. Suppose that any “public” political meeting, at least for political views regarded as out of the mainstream, stood a good chance of being attended by a clandestine government informant, who would record the names of the participants and what each of them said, to be filed away in a database indefinitely. Would you think twice before attending? If so, it suggests that the limits on state surveillance of the population appropriate to a free and democratic society are not exhausted by those aimed at protecting “privacy” in the familiar sense.