Tag: privacy

Secrecy or Privacy? The Power of Language

My friend Kelly Young notes (on Facebook) this Washington Post article on guns used in crimes:

I am awed again by the power of language. The Washington Post today claims that government protection of the identity of lawful purchasers of legal weapons is “secrecy” to be “penetrated” for the sake of the paper’s reporting. It is not “privacy” that is “violated,” as with release of airport scans of travelers, gathering names of minors seeking abortions, and warrantless searches of homes. And how about those secret journalistic sources?

(Language cleaned up slightly, as the original was typed Blackberry-style.) He’s right. The word “privacy” doesn’t appear in the article. Maybe a cynics’ dictionary would read, “Privacy is the ability to keep facts about myself hidden from you. Secrecy is your keeping facts about yourself hidden from me.”

Does Risk Management Counsel in Favor of a Biometric Traveler Identity System?

Writing on Reason’s Hit & Run blog, Robert Poole argues that the Transportation Security Administration should use a risk-based approach to security. As I noted in my recent “‘Strip-or-Grope’ vs. Risk Management” post, the Department of Homeland Security often talks about risk but fails to actually do risk management. Poole and I agree—everyone agrees—that DHS should use risk management. They just don’t.

Much as I enjoy remembering our excellent 2005 Reason debate, “Transportation Security Aggravation,” I must again differ with Poole’s prescription.

Poole says TSA should separate travelers into three basic groups (quoting at length):

  1. Trusted Travelers, who have passed a background check and are issued a biometric ID card that proves (when they arrive at the security checkpoint) that they are the person who was cleared. This group would include cockpit crews, anyone holding a government security clearance, anyone already a member of the Department of Homeland Security’s Global Entry, Sentri, and Nexus, and anyone who applied and was accepted into a new Trusted Traveler program. These people would get to bypass regular security lanes upon having their biometric card checked at the airport, subject only to random screening of a small fraction.
  2. High-risk travelers, either those about whom no information is known or who are flagged by the various Department of Homeland Security (DHS) intelligence lists as warranting “Selectee” status. They would be the only ones facing body-scanners or pat-downs as mandatory, routine screening.
  3. Ordinary travelers—basically everyone else, who would go through metal detector and put carry-ons through 2-D X-ray machines. They would not have to remove shoes or jackets, and could travel with liquids. A small fraction of this group would be subject to random “Selectee”-type screening.

He believes, and has argued for years, that dividing “good guys” from “bad guys” will effectively secure air travel. It’s certainly intuitive. Poole’s a good guy. I’m a good guy. You’re a good guy (in a non-gender-specific sense).

Knowing who people are works for us in everyday life: because we can find the people who borrow our stuff, for example—and because we know that we can be found—we, the decent people with a stake in society, govern our behavior and generally don’t steal things from each other.

Poole’s thinking takes our common experience and scales it up to a national program. Capture people’s identities, link enough biography to those identities, and—voila!—we know who the good guys are and who the (potential) bad guys are.

But precisely what biographical information assures that a person is “good”? (The proposal is for government action: it would be a violation of due process to keep the criteria secret and an equal protection violation to unfairly divide good and bad.) How do we know a person hasn’t gone bad from the time that their goodness was established?

The attacker we face with air security measures is not among the decent cohort whose behavior is channeled by identification. That attacker’s path to mischief is nicely mapped out by Poole’s proposal: Get into the Trusted Traveler group, or find someone who can get in it. (It’s easy to know if you’re a part of it. They give you a card! You can also test the system to see if you’ve been designated “high-risk” or “ordinary.”)

With a Trusted Traveler positioned to do wrong, chances are good that he or she won’t be subjected to screening and can carry whatever dangerous articles he or she likes onto a plane. The end result? Predictable gnashing of teeth and wailing about a “failure to connect the dots.”

All this is not to say that Poole’s plan should not be adopted. If he can convince an airline of its merits, and the airline can convince its shareholders, insurers, airports, and their customers, they should implement the program to their heart’s content. They should reap the economic gain, too, when they prove that they have found a way to better serve the public’s safety, convenience, privacy, and transportation needs.

It is the TSA that should not implement this program. Beyond its significant security defects, it would create a program that the government might use to control access to other goods, services, and infrastructure throughout society. The TSA would migrate toward conditioning all travel on having a government-issued biometric identity card. Fundamentally, the government should not be making these decisions or operating airline security systems.

A very interesting paper surfaced by recent public attention to this issue predicts that annual highway deaths will increase (from an already significant number) by between 11 and 275 because of people’s avoidance of privacy-invasive airport procedures. But what caught my eye in it were the following numbers:

During the past decade, terrorist attacks, with respect to air travel in the United States, have occurred three times involving six aircraft. Four planes were hijacked on 9/11, the shoe bomber incident occurred in December 2001, and, most recently, the Christmas Day underwear bomber attempted an attack in 2009. In that same span of time, over 99 million planes took off and landed within the United States, carrying over 7 billion passengers.

Especially because 9/11’s “commandeering” attack on air travel has been essentially foreclosed by hardened cockpit doors and passenger/crew awareness, these numbers suggest the smallness of the chance that someone can elude worldwide investigatory pressure, prepare an explosive and detonator that actually work, smuggle both through conventional security, and successfully use them to take down a plane. It hasn’t happened in nearly 100 million flights.
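To give a rough sense of the rate those quoted figures imply, here is a back-of-the-envelope sketch. It is a crude rate, not a risk model: it uses only the numbers quoted above and assumes every flight is equally exposed.

```python
# Figures quoted above: 3 attempted attacks over roughly 99 million
# U.S. flights in a decade. A crude per-flight rate, nothing more.
attempted_attacks = 3
flights = 99_000_000

rate_per_flight = attempted_attacks / flights
print(f"Attempted attacks per flight: {rate_per_flight:.1e}")
print(f"About 1 attempt per {flights // attempted_attacks:,} flights")
```

On these assumptions the attempt rate works out to about one in 33 million flights, and the successful-takedown rate over the same decade is, of course, zero.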

This is not an argument to “let up” on security or to stop searching for measures that will cost-effectively drive the chance of attacker success even closer to zero. But more thorough risk management analysis than mine or Bob Poole’s would probably show that accepting the above risk is preferable to either delaying and invading the bodily privacy of travelers or creating a biometric identity and background-check system.

Physician, Heal Thyself

The Wall Street Journal reports that the Commerce Department will soon come forth with a “stepped-up approach to policing Internet privacy that calls for new laws and the creation of a new position to oversee the effort.”

Meanwhile, nearly 22 months into his term, President Obama has still not named a single candidate to the Privacy and Civil Liberties Oversight Board that Congress established to review the government’s actions in response to terrorism. Had he appointed a board, it would have issued three public reports by now, and we would be awaiting a fourth.

Should Legislatures, Commissions, and Such Figure Out Privacy Problems?

The recent European Commission proposal to create a radical and likely near-impossible-to-implement “right to be forgotten” provides an opportunity to do some thinking about how privacy norms should be established.

In 1961, Italian liberal philosopher and lawyer Bruno Leoni published Freedom and the Law, an excellent, if dense, rumination on law and legislation, which, as he emphasized, are quite different things.

Legislation appears today to be a quick, rational, and far-reaching remedy against every kind of evil or inconvenience, as compared with, say, judicial decisions, the settlement of disputes by private arbiters, conventions, customs, and similar kinds of spontaneous adjustments on the part of individuals. A fact that almost always goes unnoticed is that a remedy by way of legislation may be too quick to be efficacious, too unpredictably far-reaching to be wholly beneficial, and too directly connected with the contingent views and interests of a handful of people (the legislators), whoever they may be, to be, in fact, a remedy for all concerned. Even when all this is noticed, the criticism is usually directed against particular statutes rather than against legislation as such, and a new remedy is always looked for in “better” statutes instead of in something altogether different from legislation. (page 7, 1991 Liberty Fund edition)

The new Commission proposal is an example. Apparently the EU’s 1995 Data Protection Directive didn’t do it.

It is in vernacular practice, not the dictates of some central authority, that we should discover the appropriate “common” law, Leoni emphasizes.

“[A] legal system centered on legislation resembles … a centralized economy in which all the relevant decisions are made by a handful of directors, whose knowledge of the whole situation is fatally limited and whose respect, if any, for the people’s wishes is subject to that limitation. No solemn titles, no pompous ceremonies, no enthusiasm on the part of the applauding masses can conceal the crude fact that both the legislators and the directors of a centralized economy are only particular individuals like you and me, ignorant of 99 percent of what is going on around them as far as the real transactions, agreements, attitudes, feelings, and convictions of people are concerned.” (pages 22–23, emphasis removed)

The proposed “right to be forgotten” is a soaring flight of fancy, produced by detached intellects who lack the knowledge to devise appropriate privacy norms. If it were to move forward as is, it would cripple Europe’s information economy while hamstringing international data flows. More importantly, it would deny European consumers the benefits of a modernizing economy by giving them more privacy than they probably want.

I say “probably” because I don’t know what European consumers want. I only know how to learn what they want—and that is not by observing the dictates of the people who occupy Europe’s many government bureaucracies.

Privacy and the Common Good

Jim Harper’s post Monday, responding to communitarian Amitai Etzioni on “strip search” scanners at airports, gives me an opportunity to mount one of my hobbyhorses.

My beef with Etzioni’s conclusory argument isn’t just that, as Jim observes, he purports to “weigh” the individual right to privacy against the common good (here in the guise of “security”) without any real analysis of the magnitudes on both sides. It’s that his framing is fundamentally backwards. The importance of privacy is, to a great extent, a function of its collective dimension—a point to which you’d think a communitarian theorist who’s written an entire book on privacy would be more keenly attuned. If I may indulge in a little self-quotation:

[W]hen we talk about our First Amendment right to free speech, we understand it has a certain dual character: That there’s an individual right grounded in the equal dignity of free citizens that’s violated whenever I’m prohibited from expressing my views. But also a common or collective good that is an important structural precondition of democracy. As a citizen subject to democratic laws, I have a vested interest in the freedom of political discourse whether or not I personally want to [engage in]–or even listen to–controversial speech. Looking at the incredible scope of documented intelligence abuses from the ’60s and ’70s, we can add that I have an interest in knowing whether government officials are trying to silence or intimidate inconvenient journalists, activists, or even legislators. Censorship and arrest are blunt tactics I can see and protest; blackmail or a calculated leak that brings public disgrace are not so obvious. As legal scholar Bill Stuntz has argued, the Founders understood the structural value of the Fourth Amendment as a complement to the First, because it is very hard to make it a crime to pray the wrong way or to discuss radical politics if the police can’t arbitrarily see what people are doing or writing in their homes.

I’m actually somewhat sympathetic to the notion that the individual harms that result from strip scanners are relatively slight, especially when passengers can opt for a pat down instead. In the worst case scenario, some unscrupulous TSA employee might find a way to save and circulate some of these blurry quasi-nude images, the embarrassment potential of which is likely to be mitigated by the fact that the x-ray view doesn’t really show an identifiable face.

I’m much more concerned about the social effect of making such machines commonplace—of creating a general norm that people who wish to engage in routine travel must expect to expose themselves in this way. As Michel Foucault famously observed, surveillance is not merely the passive gathering of information; it exerts a “disciplinary” power, creating what he called “docile bodies.” The airport becomes a schoolhouse whose lesson is that not even the most intimate spaces escape the gaze of authority.

In his fine book The Naked Crowd, legal scholar Jeff Rosen recounts presenting his students and other audiences with a hypothetical choice between going through a strip scanner and a “Blob Machine”—a similar scanner programmed to filter out the passenger’s body image and project any foreign objects (as determined by density) on a generic wireframe mannequin. Though he assured them that the Blob Machine was just as accurate at detecting hidden objects, he found that in every group some significant number of people still preferred to subject themselves to the strip-scanner, in what Rosen calls “a ritualistic demonstration of their own purity and trustworthiness.” But there may be more to it than that. To expose oneself, render oneself vulnerable, is also closely linked to rituals of subordination—not just in human cultures, but in the animal kingdom. Think of the pack dog signaling his recognition of the alpha male’s (or owner’s) dominance by rolling over to expose his belly. In the context of pervasive fear of terrorism, this kind of routine exposure is a way of reassuring ourselves of the power of our protectors, quite apart from whatever immediate utility the strip-scanners have as a detection and deterrence mechanism. We ought to be a little wary of any “security” measures that seem to feed into that psychological mechanism.

While I don’t think these sorts of considerations ought to be dispositive by themselves in particular circumstances where a security measure is otherwise justifiable in more conventional cost-benefit terms, I think a communitarian commentator in particular ought to be a lot more sensitive to the cumulative cultural effect of many such measures. Formal institutions and rules are important to the preservation of free societies, but so are background norms and expectations. A society that comes to accept as normal the routine observation of our naked bodies by authority as an incident to travel is, I think, in danger of losing some important cultural capital.

The ‘Communitarian’ Defense of Strip-Search Machines

What’s most interesting about Amitai Etzioni’s defense of airport strip-search machines is how rootless his approach to privacy problems is.

[O]ur public-policy decisions must balance two core values: Our commitment to individual rights and our commitment to the common good. Neither is a priori privileged. Thus, when threatened by the lethal SARS virus, we demanded that contagious people stay home—even though this limited their freedom to assemble and travel—because the contribution to the common good was high and the intrusion limited. Yet we banned the trading of medical records because these trades constituted a severe intrusion, but had no socially redeeming merit.

I disagree with this formulation, and I don’t know that he has accurately depicted the law on “trade” in medical records or the merits on that question. But more important here: these value-balancing precedents don’t guide his analysis of strip-search machines. Rather, he just concludes in favor of them using his own assessment of “the common good.”

At least Etzioni is consistent. I wrote in my 2005 Privacilla.org review of his book, The Limits of Privacy: “[T]he book amounts to little more than bare assertion—one man’s argument—that privacy is not as important as other things. The argument appears unrooted in anything more than Etzioni’s opinions.”

We have a long tradition of protecting individual rights. And we have processes for discovering the common good, such as markets, in which individual preferences agglomerate to sort it out for us. On the rare occasions when markets fail, political legislation and regulation may be a necessary substitute for natural processes. Somewhere quite a bit further down the list falls the technique “ask Amitai Etzioni.”

Unclear on Internet Security and Surveillance

The Washington Post has a poorly thought-through editorial today on the Justice Department’s “CALEA for the Cloud” initiative. That’s the formative proposal to require all Internet services to open back doors to their systems for court-ordered government surveillance.

“Some privacy advocates and technology experts have sounded alarms,” says the Post, “arguing that such changes would make programs more vulnerable to hackers.”

Those advocates—of privacy and security both—are right. Julian Sanchez recently described here how unknown hackers exploited surveillance software to eavesdrop on high government officials in Greece.

“Some argue that because the vast majority of users are law-abiding citizens, the government must accept the risk that a few criminals or terrorists may rely on the same secure networks.”

That view is also correct. The many benefits of giving the vast majority of law-abiding people secure communications outstrip the cost of allowing law-breakers also to have secure communications.

But the Post editorial goes on, sounding in certainty but exhibiting befuddlement.

The policy question is not difficult: The FBI should be able to quickly obtain court-approved information, particularly data related to a national security probe. Companies should work with the FBI to determine whether there are safe ways to provide access without inviting unwanted intrusions. In the end, there may not be a way to perfectly protect both interests — and the current state of technology may prove an impenetrable obstacle.

The policy question, which the Post piece begs, is actually very difficult. Would we be better off overall if most or all of the information that traverses the Internet were partially insecure so that the FBI could obtain court-approved information? What about protocols and communications that aren’t owned or controlled by the business sector—indeed, not controlled by anyone?

The Tahoe-LAFS secure online storage project, for example—an open-source project, not controlled by anyone—recently announced its intention not to compromise the security of the system by opening back doors.

The government could require the signatories to the statement to change the code they’re working on, but thousands of others would continue to work with versions of the code that are secure. As long as people are free to write their own code—and that will not change—there is no way to achieve selective government access that is also secure.

The current state of technology, thankfully, is an impenetrable obstacle to compromised security in the interest of government surveillance. The only conclusion here, which happily increases our security and liberty overall, is that everyone should have access to fully secure communications.