Tag: privacy

TSA’s Partial Retreat From Full-Body Scans

It’s tempting to believe that the Transportation Security Administration’s move to change the software in strip-search machines is a response to the court ruling finding that it violated the law in rolling out the machines, but it’s almost surely coincidence.

The new software will show items that the software deems suspicious on a generic outline of a body rather than showing a detailed body image. The change will indeed reduce the invasiveness of the machine strip-search process. And because the image is less revealing, it can be viewed in the screening area instead of at a remote location. That means there doesn’t need to be a person dedicated to looking at denuded images of travelers. A major cost of running these machines—payroll—drops by a substantial margin.

The software will almost certainly not do as good a job of discovering hidden weapons as a human looking at a detailed image would. If it’s calibrated to over-report, TSA agents will rightly start to ignore its alerts on belt buckles and underwire bras. If it’s calibrated to under-report, well, it might fail to alert on an actual weapon or bomb. But those things are exceedingly rare, and the increased risk probably won’t make a difference.

In fact, that’s the interesting thing happening here: the TSA is allowing a small increase in risk in exchange for large gains in privacy and cost savings. The reason it took years of complaints, litigation, legislation, and other conflict to get here is that the TSA did not analyze the risks and its responses before rolling out the strip-search machines. Trial-and-error isn’t costly to the government. The taxpayer fronts the money and gives up the privacy.

None of this means the TSA has now gotten the balance right. The airport security gauntlet will still be an overwrought mess and an affront to constitutional liberty. We will have to remain insistent on principle, on dignity and privacy, and on sound risk management while TSA gets a public relations bump from being less awful than it was before.

Obama Administration Fights Privacy Act Liability

In February 2004, privacy advocates were put off by a Supreme Court case called Doe v. Chao, in which the Court found that the Privacy Act requires a victim of a government privacy violation to show “actual damages” before receiving any compensation. The Act appeared to provide for $1,000 per violation in statutory damages, but the Court interpreted the legislation to require that actual damages be proven, after which the victim would be entitled to a minimum award of $1,000. (Statutory damages are appropriate in privacy cases against the government because government bureaucrats pay little price themselves when their agency gets fined. A penalty is required to draw oversight and political attention to violations of the law.)

Doe v. Chao was a close call given the statutory language, and the Court chose the outcome that would limit the government’s exposure to Privacy Act liability. Doing so marginally weakened the government’s attentiveness to the already insubstantial protections of the Privacy Act.

A companion case to Doe v. Chao has now reached the Supreme Court. FAA v. Cooper, which the highest court recently agreed to hear, involves a victim of a government privacy invasion who alleges “actual damages” based on evidence of mental and emotional distress. Cooper, a recreational pilot who was HIV-positive, had chosen to conceal his health status generally, but revealed it to the Social Security Administration for the purposes of pursuing disability payments. When the SSA revealed that he was HIV-positive to the Department of Transportation, it violated the Privacy Act. Cooper claims in court that he suffered mental and emotional distress at learning of the disclosure of his health status and inferentially his sexual orientation, which he had kept private.

In the Ninth Circuit Court of Appeals and now in the Supreme Court, the Obama Administration has argued that it doesn’t have to pay the victim of this privacy violation because mental and emotional distress do not qualify as “actual damages.” No one disputes that Cooper has to present objective proof of harm as a check on the truth of his claims. But the government isn’t saying that Cooper is faking distress at having his health status and sexual orientation illegally exposed by the government. The government is arguing that the court should limit “actual damages” to economic injury simply because it’s the government being sued.

The doctrine of sovereign immunity holds that the state is generally not subject to lawsuits. The state can make itself liable by a clear statement in legislation that it agrees to be sued. In the Privacy Act, Congress did exactly that: it created a cause of action against the government for Privacy Act violations.

But now the Obama Administration is arguing that the statute should be interpreted narrowly based on sovereign immunity. It’s an attempt to limit Privacy Act liability once again, insulating government officials from consequences of their wrongdoing. The Court should reject the sovereign immunity argument. Congress made the government subject to suit, and the chips should fall where they may on the question of what constitutes “actual damages.”

Putting aside sovereign immunity, what about the “actual damages” question? Should the Court recognize mental and emotional distress as a harm coming from privacy violations?

Privacy is the subjective condition people enjoy when they have the power to control information about themselves and when they have exercised that power consistent with their interests and values. People can, and often do, maintain privacy in information they share with a limited audience for limited purposes. Privacy is violated when that sense of control and controlled sharing is upended.

A privacy violation is called a “violation” because of the loss of confident control over information, which, depending on the sensitivity and circumstances, can be very concerning and even devastating. When privacy violations have this effect—not idle worry about who knows what, but the shock and mortification of having specific, sensitive information wrested from one’s control and exposed—that is when actual damages should be found. If the Privacy Act is to protect the interest after which it’s named, the Court will recognize proven mental and emotional suffering as “actual damages.”

Sorrell v. IMS Health: Not a Privacy Case

The Supreme Court’s decision in Sorrell v. IMS Health is being touted in many quarters as a privacy case, and a concerning one at that. Example: Senator Patrick Leahy (D-VT) released a statement saying “the Supreme Court has overturned a sensible Vermont law that sought to protect the privacy of the doctor-patient relationship.” That’s a stretch.

The Vermont law at issue restricted the sale, disclosure, and use of pharmacy records that revealed the prescribing practices of doctors if that information was to be used in marketing by pharmaceutical manufacturers. Under the law, prescription drug salespeople—“detailers” in industry parlance—could not access information about doctors’ prescribing to use in focusing their efforts. As the Court noted, the statute barred few other uses of this information.

It is a stretch to suggest that this is a privacy law, given the sharply limited scope of its “protections.” Rather, the law was intended to advance the state’s preferences in the area of drug prescribing, which skew toward generic drugs rather than name brands. The Court quoted the Vermont legislature itself, finding that the purpose of the law was to thwart “detailers, in particular those who promote brand-name drugs, convey[ing] messages that ‘are often in conflict with the goals of the state.’” Accordingly, the Court addressed the law as a content- and viewpoint-oriented regulation of speech which could not survive First Amendment scrutiny (something Cato and the Pacific Legal Foundation argued for in their joint brief).

What about patients’ sensitive records? Again, the case was about data reflecting doctors’ prescribing practices, which could include as little as how many times per year they prescribe given drugs. (The records probably include more detail than that.) The risk to patients is based on the idea that patients’ prescriptions might be gleaned through sufficient data-mining of doctors’ prescribing records (no doubt with other records appended). That’s a genuine problem, if largely theoretical given the availability and use of data today. Vermont is certainly free to address that problem head on in a law meant to actually protect patients’ privacy—against the state itself, for example. Better still, Vermonters and people across the country could rely on the better sources of rules in this new and challenging area: market pressure (to the extent possible in the health care area) and the (non-prescriptive, more adaptive) common law.

Whatever the way forward, Sorrell v. IMS Health is not the privacy case some are making it out to be, it’s not the outrage some are making it out to be, and it’s not the last word on data use in our society.

Government Control of Language and Other Protocols

It might be tempting to laugh at France’s ban on words like “Facebook” and “Twitter” in the media. France’s Conseil Supérieur de l’Audiovisuel recently ruled that specific references to these sites (in stories not about them) would violate a 1992 law banning “secret” advertising. The council was created in 1989 to ensure fairness in French audiovisual communications, such as in allocation of television time to political candidates, and to protect children from some types of programming.

Sure, laugh at the French. But not for too long. The United States has similarly busybody regulators, who, for example, have primly regulated such “secret” advertising themselves. American regulators carefully oversee non-secret advertising, too. Our government nannies equal the French in usurping parents’ decisions about children’s access to media. And the Federal Communications Commission endlessly plays footsie with speech regulation.

In the United States, banning words seems too blatant an affront to our First Amendment, but the United States has a fairly lively “English only” movement. Somehow, regulating an entire communications protocol doesn’t have the same censorious stink.

So it is that our Federal Communications Commission asserts a right to regulate the delivery of Internet service. The protocols on which the Internet runs are communications protocols, remember. Withdraw private control of them and you’ve got a more thoroughgoing and insidious form of speech control: it may look like speech rights remain with the people, but government controls the medium over which the speech travels.

The government has sought to control protocols in the past and will continue to do so in the future. The “crypto wars,” in which government tried to control secure communications protocols, merely presage struggles of the future. Perhaps the next battle will be over Bitcoin, an online currency that is resistant to surveillance and confiscation. In Bitcoin, communications and value transfer are melded together. To protect us from the scourge of illegal drugs and the recently manufactured crime of “money laundering,” governments will almost certainly seek to bar us from trading with one another and transferring our wealth securely and privately.

So laugh at France. But don’t laugh too hard. Leave the smugness to them.

House Approps Strips TSA of Strip-Search Funds

The fiscal 2012 Department of Homeland Security spending bill is starting to make its way through the process, and the House Appropriations Committee said in a release today that “the bill does not provide $76 million requested by the President for 275 additional advanced inspection technology (AIT) scanners nor the 535 staff requested to operate them.”

If the House committee’s approach carries the day, there won’t be 275 more strip-search machines in our nation’s airports. No word on whether the committee will defund the operations of existing strip-search machines.

Saving money and reducing privacy invasion? Sounds like a win-win.

Want Privacy? We Start by Blinding You!

As I noted earlier, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing this morning entitled: “Protecting Mobile Privacy: Your Smartphones, Tablets, Cell Phones and Your Privacy.” In it, Senator Richard Blumenthal (D-CT) engaged in a fascinating colloquy with Google’s Alan Davidson.

Blumenthal pursued Davidson about the year-old incident in which Google’s Street View cars collected data on the location of WiFi nodes and mistakenly gathered snippets of “payload data”—that is, the data traveling over open WiFi networks in the moments when the cars were passing by.

Some payload data may have contained personal information including passwords. Google has meekly been working with data protection authorities around the world since then, hoping once and for all to delete this unneeded and unwanted data.

Blumenthal was prosecutorial in tone, but made a classic prosecutor’s error: He asked questions to which he didn’t know the answers.

Isn’t “payload data” extremely valuable for mapping WiFi networks? queried Senator Blumenthal.

Davidson’s answer, and the consensus of panelists: Ummmm, no, not really.

(If you were to map pay phones, it wouldn’t matter whether people were talking on them, either, or what they were saying.)

Despite looking foolish, Senator Blumenthal persisted, asking Davidson whether collecting “payload data” should be illegal. Davidson demurred, but it’s a fascinating question.

Should it be against the law to collect data from open WiFi networks? That is, to observe radio signals passing your location on a public street? Should the government determine when you can collect radio signals, or what bands of the radio spectrum you may observe? What should you be allowed to do with information carried on a radio signal that you inadvertently capture?

If the government should have this power, the same logic would support making it illegal to collect photons that arrive at your eyes or that enter your camera lens. The government might proscribe collecting sound waves that come to your ears or microphone.

Laws against observing the world around you would certainly protect privacy! Let the government blind us all, and privacy will flourish. But this is not privacy protection anyone should want.

To understand privacy, you have to understand a little physics. As I said in an earlier comment on Google’s collection of open WiFi data:

Given the way radio works, and the common security/privacy response—encryption—it’s hard to characterize data sent in the clear as private. The people operating those networks may have wanted their communications to be private. They may have thought their communications were private. But they were sending out their communications in the clear, by radio—like a little radio station broadcasting to anyone in range.

Trying to protect privacy in unencrypted radio broadcasts (like public displays or publicly made sounds) is like trying to reverse the flow of a river—it’s a huge engineering project. Senator Blumenthal would start to protect your privacy by blinding you to the world around you. Then narrow exceptions would determine what radio signals, lights, and sounds you are allowed to observe…
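The cleartext-versus-encrypted distinction can be made concrete in a few lines of Python. This is a toy sketch, not real WiFi capture code: the one-time-pad XOR here merely stands in for the real ciphers (such as WPA2’s) that wireless networks can use, and the sample payload is invented for illustration.

```python
import os

# A frame sent "in the clear" over an open WiFi network: any passive
# receiver in radio range can read it directly.
payload = b"GET /inbox HTTP/1.1\r\nCookie: session=abc123\r\n"
print(payload.decode("ascii"))

# Encryption is the standard response. Here a one-time-pad XOR stands in
# for a real wireless cipher, purely for illustration.
key = os.urandom(len(payload))
ciphertext = bytes(p ^ k for p, k in zip(payload, key))

# To an eavesdropper without the key, the intercepted bytes are noise...
print(ciphertext.hex())

# ...but the intended recipient, holding the key, recovers the payload.
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == payload
```

The point of the sketch: the radio signal reaches every receiver in range either way. What separates a private communication from a broadcast is not the medium but whether the sender scrambled the content before transmitting it.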