Tag: privacy

Unclear on Internet Security and Surveillance

The Washington Post has a poorly thought-through editorial today on the Justice Department’s “CALEA for the Cloud” initiative. That’s the formative proposal to require all Internet services to open back doors to their systems for court-ordered government surveillance.

“Some privacy advocates and technology experts have sounded alarms,” says the Post, “arguing that such changes would make programs more vulnerable to hackers.”

Those advocates—of privacy and security both—are right. Julian Sanchez recently described here how unknown hackers exploited surveillance software to eavesdrop on high government officials in Greece.

“Some argue that because the vast majority of users are law-abiding citizens, the government must accept the risk that a few criminals or terrorists may rely on the same secure networks.”

That view is also correct. The many benefits of giving the vast majority of law-abiding people secure communications outstrip the cost of allowing law-breakers to have secure communications as well.

But the Post editorial goes on, sounding in certainty but exhibiting befuddlement.

The policy question is not difficult: The FBI should be able to quickly obtain court-approved information, particularly data related to a national security probe. Companies should work with the FBI to determine whether there are safe ways to provide access without inviting unwanted intrusions. In the end, there may not be a way to perfectly protect both interests — and the current state of technology may prove an impenetrable obstacle.

The policy question, which the Post piece begs, is actually very difficult. Would we be better off overall if most or all of the information that traverses the Internet were partially insecure so that the FBI could obtain court-approved information? What about protocols and communications that aren’t owned or controlled by the business sector—indeed, not controlled by anyone?

The Tahoe-LAFS secure online storage project, for example—an open-source project, not controlled by anyone—recently announced its intention not to compromise the security of the system by opening back doors.

The government could require the signatories to the statement to change the code they’re working on, but thousands of others would continue to work with versions of the code that are secure. As long as people are free to write their own code—and that will not change—there is no way to achieve selective government access that is also secure.
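To make the point concrete, here is a minimal sketch of client-side encryption in the spirit of Tahoe-LAFS. It is illustrative Python, not the project’s actual code, and it assumes the third-party cryptography package. Because encryption happens on the user’s machine, the storage provider only ever holds ciphertext, and a back door at the provider would yield nothing readable.

    # Illustrative sketch only (not Tahoe-LAFS code); assumes the third-party
    # "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    def client_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        """Encrypt on the user's machine; the key never leaves it."""
        key = Fernet.generate_key()              # held only by the user
        ciphertext = Fernet(key).encrypt(plaintext)
        return key, ciphertext                   # only the ciphertext is uploaded

    def client_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        """Decryption is also client-side, using the user's own key."""
        return Fernet(key).decrypt(ciphertext)

    if __name__ == "__main__":
        key, blob = client_encrypt(b"private file contents")
        # The storage service sees only `blob`. A court order served on the
        # service produces ciphertext; only the key holder can recover the data.
        assert client_decrypt(key, blob) == b"private file contents"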

The current state of technology, thankfully, is an impenetrable obstacle to compromised security in the interest of government surveillance. The only conclusion here, which happily increases our security and liberty overall, is that everyone should have access to fully secure communications.

What Privacy Invasion Looks Like

The details of Tyler Clementi’s case are slowly revealing themselves. He was the Rutgers University freshman whose sex life was exposed on the Internet when fellow students Dharun Ravi and Molly Wei placed a webcam in his dorm room and transmitted the images it captured over the Internet in real time. Shortly thereafter, Clementi committed suicide.

Whether Ravi and Wei acted out of anti-gay animus, titillation about Clementi’s sexual orientation, or simply titillation about sex, their actions were utterly outrageous, offensive, and outside of the bounds of decency. Moreover, according to Middlesex County, New Jersey prosecutors, they were illegal. Ravi and Wei have been charged with invasion of privacy.

This is what invasion of privacy looks like. It’s the outrageous, offensive, truly galling revelation of private facts like what happened in this case. Over the last 120 years, common law tort doctrine has evolved to find that people have a right not to suffer such invasions. New Jersey has apparently enshrined that right in a criminal statute.

The story illustrates how quaint are some of the privacy “invasions” we often discuss, such as the tracking of people’s web surfing by advertising networks. That information is not generally revealed in any meaningful way. It is simply being used to serve tailored ads.

This event also illustrates how privacy law is functioning in our society. It’s functioning fairly well. Law, of course, is supposed to reflect deeply held norms. Privacy norms—like the norm against exposing someone’s sexual activity without consent—are widely shared, so that the laws backing up those norms are rarely violated.

It is probably a common error to believe that law is “working” when it is exercised fairly often, with fines and penalties being doled out as a matter of routine. Holders of this view, of course, see law—more accurately, legislation—as a tool for shaping society. Many of them would like to end the societal debate about online privacy by establishing a “uniform national privacy standard.” But nobody knows what that standard should be. The more often legal actions are brought against online service providers, the stronger the signal that online privacy norms are unsettled. That privacy debate continues, and it should.

It is not debatable that what Ravi and Wei did to Tyler Clementi was profoundly wrong. That was a privacy invasion.

The OECD Privacy Guidelines at 30

If you blinked, you missed it. Heaven knows, I did. The OECD privacy guidelines celebrated their 30th birthday on Thursday last week. They were introduced as a Recommendation by the Council of the Organization for Economic Cooperation and Development on September 23, 1980, and were meant to harmonize global privacy regulation.

Should we fete the guidelines on their birthday, crediting how they have solved our privacy problems? Not so much. When they came out, people felt insecure about their privacy, and demand for national privacy legislation was rising, risking the creation of tensions among national privacy regimes. Today, people feel insecure about their privacy, and demand for national privacy legislation is rising, risking the creation of tensions among national privacy regimes. Which is to say, not much has been solved.

In 2002—and I’m still at this? Kill me now—I summarized the OECD Guidelines and critiqued them as follows on the “OECD Guidelines” Privacilla page.

The Guidelines, and the concept of “fair information practices” generally, fail to address privacy coherently and completely because they do not recognize a rather fundamental premise: the vast difference in rights, powers, and incentives between governments and the private sector. Governments have heavy incentives to use and sometimes misuse information. They may appropriately be controlled by “fair information practices.”

Private sector entities tend to have a balance of incentives, and they are subject to both legal and market punishments when they misuse information. Saddling them with additional, top-down regulation in the form of “fair information practices” would raise the cost of goods and services to consumers without materially improving their privacy.

Not much has changed in my thinking, though today I would be more careful to emphasize that many FIPs are good practices. It’s just that they are good in some circumstances and not in others, some FIPs are in tension with other FIPs, and so on.

The OECD Guidelines and the many versions of FIPs are a sort of privacy bible to many people. But nobody actually lives by the book, and we wouldn’t want them to. Happy birthday anyway, OECD guidelines.

Speech, Privacy, and Government Infiltration

Yesterday, I mentioned a recent report from the Justice Department’s Office of the Inspector General on some potentially improper instances of FBI monitoring of domestic anti-war groups. It occurs to me that it also provides a useful data point that’s relevant to last week’s post about the pitfalls of thinking about the proper limits of government information gathering exclusively in terms of “privacy.”

As the report details, an agent in the FBI’s Pittsburgh office sent a confidential source to report on organizing meetings for anti-war marches held by the anarchist Pittsburgh Organizing Group (POG). The agent admitted to OIG that his motive was a general desire to cultivate an informant rather than any particular factually grounded investigative purpose. Unsurprisingly, reports generated by the source contained “no information remotely relevant to actual or potential criminal activity,” and at least one report was “limited to identifying information about the participants in a political discussion together with characterizations of the contents of the speech of the participants.” The agent dutifully recorded that at one such gathering “Meeting and discussion was primarily anti anything supported by the main stream [sic] American.”

Now, in fact, the OIG suggests that the retention in FBI records of personally identifiable information about citizens’ political speech, unrelated to any legitimate investigation into suspected violations of federal law, may well have violated the Privacy Act. But if we wanted to pick semantic nits, we could surely make the argument that this is not really an invasion of “privacy” as traditionally conceived—and certainly not as conceived by our courts. The gatherings don’t appear to have been very large—the source was able to get the names and ages of all present—but they were, in principle, announced on the Web and open to the public.

Fortunately, the top lawyer at the Pittsburgh office appears to have been duly appalled when he discovered what had been done, and made sure the agents in the office got a refresher training on the proper and improper uses of informants. But as a thought experiment, suppose this sort of thing were routine. Suppose that any “public” political meeting, at least for political views regarded as out of the mainstream, stood a good chance of being attended by a clandestine government informant, who would record the names of the participants and what each of them said, to be filed away in a database indefinitely.  Would you think twice before attending? If so, it suggests that the limits on state surveillance of the population appropriate to a free and democratic society are not exhausted by those aimed at protecting “privacy” in the familiar sense.

“This cries out for a Jim Harper rant.”

… says a colleague. Or maybe it speaks for itself.

Sheriffs want lists of patients using painkillers

Assuming, for the sake of argument, that the War on Drugs is meant to help people: The helping hand of government strips away privacy before it goes to work.

My 2004 Cato Policy Analysis on point is “Understanding Privacy – and the Real Threats to It.”

Competing Naïvetés: How to Produce a Privacy-Protective Society

My Economist.com debate on whether governments should “do far more to protect online privacy” has now concluded. The vote on the motion went to my opponent, who argued for government involvement, by a margin of 52 to 48 percent.

I won a moral victory, perhaps, moving the vote from 70 percent in favor of government intervention to the very close ending tally. My commentary highlighting the substantial role of government in undermining privacy seems to have begun moving the dial in my direction.

A pleasant side-effect of the debate was to open lines of communication with a number of my privacy-advocate colleagues, many of whom do not share my libertarian outlook. One called me naïve to think consumers can successfully demand privacy given the imposing wall of corporate practices that rely on intensive and comprehensive data collection.

Full health privacy, for example, would require a marketplace in which consumers can pay cash for services or demand that information about their treatments not be shared. It is apparently illegal for a pharmacy to fill a prescription without identifying the patient, a requirement that sets up the conditions for nationwide tracking of patients’ medicines and, inferentially, their health conditions.

This prescription tracking is facilitated and reinforced by government regulation, of course. Consumers cannot exercise privacy self-help when the law requires pharmacies to collect information about them. The freedom to pay cash for medicines, and to do so unidentified, is at best a long way off.

But I had suggested the naïveté of the pro-government view as well:

The arguments for government control certainly seem to rest on good-hearted premises: if we just elect the right people, and if they just do the right thing, then we can have a cadre of public-spirited civil servants dispassionately carrying out a neutral, effective privacy-protection regime.

But this romantic vision of government seems never to come true. Crass political dealmaking inhabits every step, from the financing of elections, to logrolling in the legislative process, to implementation that favours agencies’ interests and the preferences of the politically powerful.

For government to protect privacy, the ideal of “clean government” would have to be realized. But proposals to move policy in that direction, such as regulations on how elections are financed, happen to conflict with fundamental American rights like freedom of speech and petition. Public financing would make the government itself politicians’ most important constituent, ripping it loose from the moorings that protect individual rights and liberties.

A host of legislative process reforms might only begin to drive a wedge between politicians and what they do best. And the ideal of a neutral, scientific regulatory process has not materialized. Regulation is a different, more obscure forum for expressions of political power. For this reason and others, regulation is poorly suited to balancing all the interests of consumers compared to market processes, which are the best method we have for discovering consumers’ true interests and apportioning resources accordingly.

I’ll take my naïveté over the alternative. Reducing the power of government and thereby setting the conditions for consumer-centered privacy protection seems a more likely prospect than taking the politics out of politics, which is an even bigger, even more forbidding project.

Economist Debate: ‘Governments Must Do Far More to Protect Online Privacy’

I’m at the mid-point of an online debate hosted by Economist.com on the proposition: “This house believes that governments must do far more to protect online privacy.”

I’m on the “No” side. In my opening statement, I tried to give some definition to the many problems referred to as “privacy,” and I argued for personal responsibility on the part of Internet users. I even gave out instructions for controlling cookies, by which people can deny ad networks their most common source of consumer demographic information if they wish. Concluding, I said:

Government “experts” should not dictate social rules. Rather, interactions among members of the internet community should determine the internet’s social and business norms.
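To illustrate the self-help point above (this is a sketch of my own here, not the cookie instructions from my opening statement), a client can simply refuse to store cookies from any domain other than the site the user chose to visit, which denies third-party ad networks the identifier they rely on for tracking. The sketch below uses only Python’s standard library.

    import urllib.request
    from http.cookiejar import CookieJar, DefaultCookiePolicy

    def make_first_party_opener(site_domain: str) -> urllib.request.OpenerDirector:
        """Build an opener that stores cookies only for the visited site."""
        # Allow both host-only cookies ("example.com") and domain cookies
        # (".example.com"); cookies from every other domain are discarded.
        policy = DefaultCookiePolicy(allowed_domains=[site_domain, "." + site_domain])
        jar = CookieJar(policy=policy)
        return urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

    if __name__ == "__main__":
        opener = make_first_party_opener("example.com")
        # Cookies set by example.com are kept; a Set-Cookie from any other
        # domain (say, an ad server reached through a redirect) is rejected.
        opener.open("https://example.com/").read()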

In the “rebuttal” stage, which started today, I dedicated most of my commentary to documenting how governments undermine privacy—and I barely scratched the surface.

Along with surveillance program after surveillance program, I discussed how government biases protocols and technologies against privacy, using the Social Security number as an example. I don’t know what syndrome causes many privacy advocates to seek protection in the arms of governments, which are systematic and powerful privacy abusers themselves.

Nonetheless, I’m opposing the “free lunch” argument, which holds that a group of government experts can come up with neutral, balanced, low-cost solutions to many different online problems without thwarting innovation. Right now the voting is with the guy offering people the free lunch, not the guy arguing for consumer education and personal responsibility.

You can vote here.