Tag: privacy

What Privacy Invasion Looks Like

The details of Tyler Clementi’s case are slowly revealing themselves. He was the Rutgers University freshman whose sex life was exposed on the Internet when fellow students Dharun Ravi and Molly Wei placed a webcam in his dorm room, transmitting the images that it captured in real time on the Internet. Shortly thereafter, Clementi committed suicide.

Whether Ravi and Wei acted out of anti-gay animus, titillation about Clementi’s sexual orientation, or simply titillation about sex, their actions were utterly outrageous, offensive, and outside of the bounds of decency. Moreover, according to Middlesex County, New Jersey prosecutors, they were illegal. Ravi and Wei have been charged with invasion of privacy.

This is what invasion of privacy looks like. It’s the outrageous, offensive, truly galling revelation of private facts like what happened in this case. Over the last 120 years, common law tort doctrine has evolved to find that people have a right not to suffer such invasions. New Jersey has apparently enshrined that right in a criminal statute.

The story illustrates how quaint are some of the privacy “invasions” we often discuss, such as the tracking of people’s web surfing by advertising networks. That information is not generally revealed in any meaningful way. It is simply being used to serve tailored ads.

This event also illustrates how privacy law is functioning in our society. It’s functioning fairly well. Law, of course, is supposed to reflect deeply held norms. Privacy norms—like the norm against exposing someone’s sexual activity without consent—are widely shared, so that the laws backing up those norms are rarely violated.

It is probably a common error to believe that law is “working” when it is exercised fairly often, with fines and penalties doled out routinely. Holders of this view see law—more accurately, legislation—as a tool for shaping society. Many of them would like to end the societal debate about online privacy by establishing a “uniform national privacy standard.” But nobody knows what that standard should be. The more often legal actions are brought against online service providers, the stronger the signal that online privacy norms are unsettled. That privacy debate continues, and it should.

It is not debatable that what Ravi and Wei did to Tyler Clementi was profoundly wrong. That was a privacy invasion.

The OECD Privacy Guidelines at 30

If you blinked, you missed it. Heaven knows, I did. The OECD privacy guidelines celebrated their 30th birthday on Thursday last week. They were introduced as a Recommendation by the Council of the Organization for Economic Cooperation and Development on September 23, 1980, and were meant to harmonize global privacy regulation.

Should we fete the guidelines on their birthday, crediting how they have solved our privacy problems? Not so much. When they came out, people felt insecure about their privacy, and demand for national privacy legislation was rising, risking the creation of tensions among national privacy regimes. Today, people feel insecure about their privacy, and demand for national privacy legislation is rising, risking the creation of tensions among national privacy regimes. Which is to say, not much has been solved.

In 2002—and I’m still at this? Kill me now—I summarized the OECD Guidelines and critiqued them as follows on the “OECD Guidelines” Privacilla page.

The Guidelines, and the concept of “fair information practices” generally, fail to address privacy coherently and completely because they do not recognize a rather fundamental premise: the vast difference in rights, powers, and incentives between governments and the private sector. Governments have heavy incentives to use and sometimes misuse information. They may appropriately be controlled by “fair information practices.”

Private sector entities tend to have a balance of incentives, and they are subject to both legal and market punishments when they misuse information. Saddling them with additional, top-down regulation in the form of “fair information practices” would raise the cost of goods and services to consumers without materially improving their privacy.

Not much has changed in my thinking, though today I would be more careful to emphasize that many FIPs are good practices. It’s just that they are good in some circumstances and not in others, some FIPs are in tension with other FIPs, and so on.

The OECD Guidelines and the many versions of FIPs are a sort of privacy bible to many people. But nobody actually lives by the book, and we wouldn’t want them to. Happy birthday anyway, OECD guidelines.

Speech, Privacy, and Government Infiltration

Yesterday, I mentioned a recent report from the Justice Department’s Office of the Inspector General on some potentially improper instances of FBI monitoring of domestic anti-war groups. It occurs to me that it also provides a useful data point that’s relevant to last week’s post about the pitfalls of thinking about the proper limits of government information gathering exclusively in terms of “privacy.”

As the report details, an agent in the FBI’s Pittsburgh office sent a confidential source to report on organizing meetings for anti-war marches held by the anarchist Pittsburgh Organizing Group (POG). The agent admitted to OIG that his motive was a general desire to cultivate an informant rather than any particular factually grounded investigative purpose. Unsurprisingly, reports generated by the source contained “no information remotely relevant to actual or potential criminal activity,” and at least one report was “limited to identifying information about the participants in a political discussion together with characterizations of the contents of the speech of the participants.” The agent dutifully recorded that at one such gathering “Meeting and discussion was primarily anti anything supported by the main stream [sic] American.”

Now, in fact, the OIG suggests that the retention in FBI records of personally identifiable information about citizens’ political speech, unrelated to any legitimate investigation into suspected violations of federal law, may well have violated the Privacy Act. But if we wanted to pick semantic nits, we could surely make the argument that this is not really an invasion of “privacy” as traditionally conceived—and certainly not as conceived by our courts. The gatherings don’t appear to have been very large—the source was able to get the names and ages of all present—but they were, in principle, announced on the Web and open to the public.

Fortunately, the top lawyer at the Pittsburgh office appears to have been duly appalled when he discovered what had been done, and made sure the agents in the office got a refresher training on the proper and improper uses of informants. But as a thought experiment, suppose this sort of thing were routine. Suppose that any “public” political meeting, at least for political views regarded as out of the mainstream, stood a good chance of being attended by a clandestine government informant, who would record the names of the participants and what each of them said, to be filed away in a database indefinitely.  Would you think twice before attending? If so, it suggests that the limits on state surveillance of the population appropriate to a free and democratic society are not exhausted by those aimed at protecting “privacy” in the familiar sense.

“This cries out for a Jim Harper rant.”

… says a colleague. Or maybe it speaks for itself.

Sheriffs want lists of patients using painkillers

Assuming, for the sake of argument, that the War on Drugs is meant to help people: The helping hand of government strips away privacy before it goes to work.

My 2004 Cato Policy analysis on point is: “Understanding Privacy – and the Real Threats to It.”

Competing Naïvetés: How to Produce a Privacy-Protective Society

My Economist.com debate on whether governments should “do far more to protect online privacy” has now concluded. The vote on the motion went to my opponent, supporting government involvement by a margin of 52 to 48 percent.

I won a moral victory, perhaps, moving the vote from 70 percent in favor of government intervention to the very close ending tally. My commentary highlighting the substantial role of government in undermining privacy seems to have begun moving the dial in my direction.

A pleasant side-effect of the debate was to open lines of communication with a number of my privacy-advocate colleagues, many of whom do not share my libertarian outlook. One called me naïve to think consumers can successfully demand privacy given the imposing wall of corporate practices that rely on intensive and comprehensive data collection.

Full health privacy, for example, would require a marketplace in which consumers can pay cash for services or demand that information about their treatments not be shared. Yet it is apparently illegal for a pharmacy to fill a prescription without identifying the patient, a requirement that sets up the conditions for nationwide tracking of patients’ medicines and, inferentially, their health conditions.

This prescription tracking is facilitated and reinforced by government regulation, of course. Consumers cannot exercise privacy self-help when the law requires pharmacies to collect information about them. Freedom to pay cash for medicines, and to do so unidentified, remains a long way off.

But I had suggested the naïveté of the pro-government view as well:

The arguments for government control certainly seem to rest on good-hearted premises: if we just elect the right people, and if they just do the right thing, then we can have a cadre of public-spirited civil servants dispassionately carrying out a neutral, effective privacy-protection regime.

But this romantic vision of government seems never to come true. Crass political dealmaking inhabits every step, from the financing of elections, to logrolling in the legislative process, to implementation that favours agencies’ interests and the preferences of the politically powerful.

For government to protect privacy, the ideal of “clean government” would have to be realized. But proposals to move policy in that direction, such as regulations on how elections are financed, happen to conflict with fundamental American rights like freedom of speech and petition. Public financing would make the government itself politicians’ most important constituent, ripping it loose from the moorings that protect individual rights and liberties.

A host of legislative process reforms might only begin to drive a wedge between politicians and what they do best. And the ideal of a neutral, scientific regulatory process has not materialized. Regulation is a different, more obscure forum for expressions of political power. For this reason and others, regulation is poorly suited to balancing all the interests of consumers compared to market processes, which are the best method we have for discovering consumers’ true interests and apportioning resources accordingly.

I’ll take my naïveté over the alternative. Reducing the power of government and thereby setting the conditions for consumer-centered privacy protection seems a more likely prospect than taking the politics out of politics, which is an even bigger, even more forbidding project.

Economist Debate: ‘Governments Must Do Far More to Protect Online Privacy’

I’m at the mid-point of an online debate hosted by the Economist.com on the proposition: “This house believes that governments must do far more to protect online privacy.”

I’m on the “No” side. In my opening statement, I tried to give some definition to the many problems referred to as “privacy,” and I argued for personal responsibility on the part of Internet users. I even gave out instructions for controlling cookies, by which people can deny ad networks their most common source of consumer demographic information if they wish. Concluding, I said:

Government “experts” should not dictate social rules. Rather, interactions among members of the internet community should determine the internet’s social and business norms.
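The cookie controls mentioned above can be sketched in code. Below is a minimal illustration using Python’s standard `http.cookiejar` of how a client can simply refuse to store cookies from an ad network’s domain, denying it a stable identifier; the tracker domain `ads.example.net` is a made-up placeholder, not a real network.

```python
# Sketch: denying an ad network its tracking cookie on the client side.
# The domain "ads.example.net" is a hypothetical tracker, used only for
# illustration.
from http.cookiejar import CookieJar, DefaultCookiePolicy

# Any Set-Cookie header from a blocked domain is silently discarded,
# so the network never gets a persistent identifier echoed back to it.
policy = DefaultCookiePolicy(
    blocked_domains=["ads.example.net", ".ads.example.net"]
)
jar = CookieJar(policy=policy)

print(policy.is_blocked("ads.example.net"))   # blocked: cookies refused
print(policy.is_blocked("news.example.com"))  # not blocked: cookies kept
```

A jar built this way can be attached to an ordinary `urllib.request` opener via `HTTPCookieProcessor(jar)`; browsers expose the same choice through their third-party-cookie settings.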

In the “rebuttal” stage, which started today, I dedicated most of my commentary to documenting how governments undermine privacy—and I barely scratched the surface.

Along with surveillance program after surveillance program, I discussed how government biases protocols and technologies against privacy, using the Social Security number as an example. I don’t know what syndrome causes many privacy advocates to seek protection in the arms of governments, which are systematic and powerful privacy abusers themselves.

Nonetheless, I’m opposing the “free lunch” argument, which holds that a group of government experts can come up with neutral and balanced, low-cost solutions to many different online problems without thwarting innovation. Right now the voting is with the guy offering people the free lunch, not the guy arguing for consumer education and personal responsibility.

You can vote here.

GPS Tracking and a ‘Mosaic Theory’ of Government Searches

The Electronic Frontier Foundation trumpets a surprising privacy win last week in the U.S. Court of Appeals for the D.C. Circuit. In U.S. v. Maynard (PDF), the court held that the use of a GPS tracking device to monitor the public movements of a vehicle—something the Supreme Court had held not to constitute a Fourth Amendment search in U.S. v. Knotts—could nevertheless become a search when conducted over an extended period. The Court in Knotts had considered only tracking that encompassed a single journey on a particular day, reasoning that the target of surveillance could have no “reasonable expectation of privacy” in the fact of a trip that any member of the public might easily observe. But the Knotts Court explicitly reserved judgment on potential uses of the technology with broader scope, recognizing that “dragnet” tracking that subjected large numbers of people to “continuous 24-hour surveillance” might present a different question. Here, the D.C. court determined that continuous tracking for a period of over a month did violate a reasonable expectation of privacy—and therefore constituted a Fourth Amendment search requiring a judicial warrant—because such intensive, secretive tracking by means of public observation is so costly and risky that no reasonable person expects to be subject to such comprehensive surveillance.

Perhaps ironically, the court’s logic here rests on the so-called “mosaic theory” of privacy, which the government has relied on when resisting Freedom of Information Act requests. The theory holds that pieces of information that are not in themselves sensitive or potentially injurious to national security can nevertheless be withheld, because in combination (with each other or with other public facts) they permit the inference of facts that are sensitive or secret. The “mosaic,” in other words, may be far more than the sum of the individual tiles that constitute it. Leaving aside for the moment the validity of the government’s invocation of this idea in FOIA cases, there’s an obvious intuitive appeal to it, and indeed, it fits our real-world expectations about privacy much better than the cruder theory that assumes the sum of “public” facts must always itself be a public fact.

Consider an illustrative hypothetical.  Alice and Bob are having a romantic affair that, for whatever reason, they prefer to keep secret. One evening before a planned date, Bob stops by the corner pharmacy and—in full view of a shop full of strangers—buys some condoms.  He then drives to a restaurant where, again in full view of the other patrons, they have dinner together.  They later drive in separate cars back to Alice’s house, where the neighbors (if they care to take note) can observe from the presence of the car in the driveway that Alice has an evening guest for several hours. It being a weeknight, Bob then returns home, again by public roads. Now, the point of this little story is not, of course, that a judicial warrant should be required before an investigator can physically trail Bob or Alice for an evening.  It’s simply that in ordinary life, we often reasonably suppose the privacy or secrecy of certain facts—that Bob and Alice are having an affair—that could in principle be inferred from the combination of other facts that are (severally) clearly public, because it would be highly unusual for all of them to be observed by the same public.   Even more so when, as in Maynard, we’re talking not about the “public” events of a single evening, but comprehensive observation over a period of weeks or months.  One must reasonably expect that “anyone” might witness any of such a series of events; it does not follow that one cannot reasonably expect that no particular person or group would be privy to all of them. Sometimes, of course, even our reasonable expectations are frustrated without anyone’s rights being violated: A neighbor of Alice’s might by chance have been at the pharmacy and then at the restaurant. 
But as the Supreme Court held in Kyllo v. US, even when some information might in principle be obtained by public observation, the use of technological means not in general public use to learn the same facts may nevertheless qualify as a Fourth Amendment search, especially when the effect of the technology is to render easy a degree of monitoring that would otherwise be so laborious and costly as to be normally infeasible.

Now, as Orin Kerr argues at the Volokh Conspiracy, significant as the particular result in this case might be, it’s the approach to Fourth Amendment privacy embedded here that’s the really big story. Orin, however, thinks it a hopelessly misguided one—and the objections he offers are all quite forceful.  Still, I think on net—especially as technology makes such aggregative monitoring more of a live concern—some kind of shift to a “mosaic” view of privacy is going to be necessary to preserve the practical guarantees of the Fourth Amendment, just as in the 20th century a shift from a wholly property-centric to a more expectations-based theory was needed to prevent remote sensing technologies from gutting its protections. But let’s look more closely at Orin’s objections.

First, there’s the question of novelty. Under the mosaic theory, he writes:

[W]hether government conduct is a search is measured not by whether a particular individual act is a search, but rather whether an entire course of conduct, viewed collectively, amounts to a search. That is, individual acts that on their own are not searches, when committed in some particular combinations, become searches. Thus in Maynard, the court does not look at individual recordings of data from the GPS device and ask whether they are searches. Instead, the court looks at the entirety of surveillance over a one-month period and views it as one single “thing.” Off the top of my head, I don’t think I have ever seen that approach adopted in any Fourth Amendment case.

I can’t think of one that explicitly adopts that argument. But consider again the Kyllo case mentioned above. Without a warrant, police used thermal imaging technology to detect the presence of marijuana-growing lamps within a private home from a vantage point on a public street. In a majority opinion penned by Justice Scalia, the court balked at this: the scan violated the sanctity and privacy of the home, though it involved no physical intrusion, by revealing details of the home’s interior that could not otherwise have been learned without physical entry. But stop and think for a moment about how thermal imaging technology works, and try to pinpoint where exactly the Fourth Amendment “search” occurs. The thermal radiation emanating from the home was, well… emanating from the home, and passing through or being absorbed by various nearby people and objects. It beggars belief to think that picking up the radiation could in itself be a search—you can’t help but do that!

When the radiation is actually measured, then? More promising, but then any use of an infrared thermometer within the vicinity of a home might seem to qualify, whether or not the purpose of the user was to gather information about the home, and indeed, whether or not the thermometer was precise enough to reveal any useful information about internal temperature variations within the home.  The real privacy violation here—the disclosure of private facts about the interior of the home—occurs only when a series of very many precise measurements of emitted radiation are processed into a thermographic image.  To be sure, it is counterintuitive to describe this as a “course of conduct” because the aggregation and analysis are done quite quickly within the processor of the thermal camera, which makes it natural to describe the search as a single act: Creating a thermal image.  But if we zoom in, we find that what the Court deemed an unconstitutional invasion of privacy was ultimately the upshot of a series of “public” facts about ambient radiation levels, combined and analyzed in a particular way.  The thermal image is, in a rather literal sense, a mosaic.

The same could be said about long-distance  spy microphones: Vibrating air is public; conversations are private. Or again, consider location tracking, which is unambiguously a “search” when it extends to private places: It might be that what is directly measured is only the “public” fact about the strength of a particular radio signal at a set of receiver sites; the “private” facts about location could be described as a mere inference, based on triangulation analysis (say), from the observable public facts.
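The location-tracking example can be made concrete. Trilateration recovers a position from nothing but per-receiver range measurements—each arguably a “public” fact about signal propagation—by subtracting the circle equations pairwise to get a linear system. The receiver coordinates and target below are made-up illustrative values, not anything from the case.

```python
# Sketch: inferring a "private" location from three "public" distance
# measurements (as might be derived from signal strength or timing).
# All coordinates here are hypothetical.
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Return (x, y) given three receiver positions and their ranges."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle equations pairwise yields two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A target at (3, 4), as seen from three receivers:
receivers = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist(r, (3, 4)) for r in receivers]
print(trilaterate(*receivers, *dists))  # → approximately (3.0, 4.0)
```

No single range measurement reveals the target’s position; only the combination does—which is exactly the mosaic point.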

There’s also a scope problem. When, precisely, do individual instances of permissible monitoring become a search requiring judicial approval? That’s certainly a thorny question, but it arises just as urgently in the other type of hypothetical case alluded to in Knotts, involving “dragnet” surveillance of large numbers of individuals over time. Here, too, there’s an obvious component of duration: nobody imagines that taking a single photograph revealing the public locations of perhaps hundreds of people at a given instant constitutes a Fourth Amendment search. And just as there’s no precise number of grains of sand that constitutes a “heap,” there’s no obvious way to say exactly what number of people, observed for how long, is required to distinguish individualized tracking from “dragnet” surveillance. But if we anchor ourselves in the practical concerns motivating the adoption of the Fourth Amendment, it seems clear enough that an interpretation that detected no constitutional problem with continuous monitoring of every public movement of every citizen would mock its purpose. If we accept that much, a line has to be drawn somewhere. Come to think of it, Orin has himself proposed a procedural dichotomy between electronic searches that are “person-focused” and those that are “data-focused.” That approach has much to recommend it, but it is likely to present very similar boundary-drawing problems.

Orin also suggests that the court improperly relies upon a “probabilistic” model of the Fourth Amendment here (looking to what expectations about monitoring are empirically reasonable), whereas the Court has traditionally relied on a “private facts” model to deal with cases involving new technologies (looking to which types of information it is reasonable to consider private by their nature). Without recapitulating the very insightful paper linked above, I’ll just say that the boundaries between models in Orin’s highly useful schema do not strike me as quite so bright. The ruling in Kyllo, after all, turned in part on the fact that infrared imaging devices are not in “general public use,” suggesting that the identification of “private facts” itself has an empirical and probabilistic component. The analyses aren’t really separate. What’s crucial to bear in mind is that there are always multiple layers of facts involved in even a relatively simple search: facts about the strength of a particular radio signal, facts about presence at a public or private place at a particular instant, facts about Alice and Bob’s affair. In cases involving new technologies, the problem—though seldom stated explicitly—is often precisely which domain of facts to treat as the “target” of the search. The point of the expectations analysis in Maynard is precisely to establish that there is a domain of facts about macro-level behavioral patterns distinct from the unambiguously public facts about specific public movements at particular times, and that we have different attitudes about these two domains.

Sorting all this out going forward is likely to be every bit as big a headache as Orin suggests. But if the Fourth Amendment has a point—if it enjoins us to preserve a particular balance between state power and individual autonomy—then as technology changes, its rules of application may need to get more complicated to track that purpose, as they did when the Court ruled that an admirably simple property rule was no longer an adequate criterion for identifying a “search.”  Otherwise we make Fourth Amendment law into a cargo cult, a set of rituals whose elegance of form is cold consolation for their abandonment of function.