
The Census’ Broken Privacy Promise

When the 1940 census was collected, the public was reassured that the information it gathered would be kept private. “No one has access to your census record except you,” the public was told. President Franklin Roosevelt said: “There need be no fear that any disclosure will be made regarding any individual or his affairs.”

Apparently the limits on what the government can do with census information have their limits. Today the 1940 census goes online.

When the Census Bureau transferred the data to the National Archives, it agreed to the release of the data 72 years after its collection. So much for those privacy promises.

Adam Marcus of TechFreedom writes on CNET:

Eighty-seven percent of Americans can find a direct family link to one or more of the 132+ million people listed on those rolls. The 1940 census included 65 questions, with an additional 16 questions asked of a random 5 percent sample of people. You can find out what your father did, how much he made, or if he was on the dole. You may be able to find out if your mother had an illegitimate child before she married your father.

To be sure, this data will open a fascinating trove for researchers studying life 70 years ago. But the Federal Trade Commission would not recognize a “fascinating trove” exception if a private company were to release data it had collected under promises of confidentiality.

Government officials endlessly point the finger at the private sector for being a privacy scourge. Senator Al Franken did so last week in a speech to the American Bar Association (text; Fisking). He’s the chairman of a Senate subcommittee dedicated to examining the defects in private sector information practices. Meanwhile, the federal government is building a massive data and analysis center to warehouse information hoovered from our private communications, and the Obama Administration recently extended to five years the amount of time it can retain private information about Americans under no suspicion of ties to terrorism.

Marcus has the bare minimum lesson to take from this episode: “Remember this in 2020.”

Supreme Court: No Privacy Act Liability for Mental and Emotional Distress

Back in July of last year, I wrote about a case in the Supreme Court called FAA v. Cooper. In that Privacy Act case, a victim of a government privacy invasion had alleged “actual damages” based on evidence of mental and emotional distress.

Cooper, a recreational pilot who was HIV-positive, had chosen to conceal his health status generally, but revealed it to the Social Security Administration for the purposes of pursuing disability payments. When the SSA revealed that he was HIV-positive to the Department of Transportation, which was investigating pilots’ licenses in the hands of the medically unfit, the SSA violated the Privacy Act. Cooper claimed that he suffered mental and emotional distress at learning of the disclosure of his health status and, inferentially, his sexual orientation, which he had kept private.

The question before the Court was whether the Privacy Act’s grant of compensation for “actual damages” included damages for mental and emotional distress. This week the Court held … distressingly … [sorry, I had to] … NO. Under the doctrine of sovereign immunity, the Privacy Act has to be explicit about providing compensation for mental and emotional distress. Justice Alito wrote for a Court divided 5-3 along traditional ideological lines (Justice Kagan not participating).

The decision itself is a nice example of two sides contesting how statutory language should be interpreted. My preference would have been for the Court to hold that the Privacy Act recognizes mental and emotional distress. After all, a privacy violation is the loss of confident control over information, which, depending on the sensitivity and circumstances, can be very concerning and even devastating.

The existence of harm is a big elephant in the privacy room. Many advocates seem to be trying to lower the bar in terms of what constitutes harm, arguing that the creation of a risk is a harm or that worrisome information practices are harmful. But I think harm rises above doing things someone might find “worrisome.” Harm may occur, as in this case, when one’s (hidden) HIV status and thus sexual orientation is revealed. Harm has occurred when one records and uploads to the Internet another’s sexual activity. But I don’t think it’s harmful if a web site or ad network gathers from your web surfing that you’ve got an interest in outdoor sports.

The upshot of Cooper is this: Congress can and should amend the Privacy Act so that the damages it must compensate when it has harmed someone include real and proven mental and emotional distress.

FTC Issues Groundhog Report on Privacy

The Federal Trade Commission issued a report today calling on companies “to adopt best privacy practices.” In related news, most people support airline safety… The report also “recommends that Congress consider enacting general privacy legislation, data security and breach notification legislation, and data broker legislation.”

This is regulatory cheerleading of the same kind our government’s all-purpose trade regulator put out a dozen years ago. In May of 2000, the FTC issued a report finding “that legislation is necessary to ensure further implementation of fair information practices online” and recommending a framework for such legislation. Congress did not act on that, and things are humming along today without top-down regulation of information practices on the Internet.

By “humming along,” I don’t mean that all privacy problems have been solved. (And they certainly wouldn’t have been solved if Congress had passed a law saying they should be.) “Humming along” means that ongoing push-and-pull among companies and consumers is defining the information practices that best serve consumers in all their needs, including privacy.

Congress won’t be enacting legislation this year, and there doesn’t seem to be any groundswell for new regulation in the next Congress, though President Obama’s reelection would leave him unencumbered by future elections and so inclined to indulge the pro-regulatory fantasies of his supporters.

The folks who want regulation of the Internet in the name of privacy should explain how they will do better than Congress did with credit reporting. In forty years of regulating credit bureaus, Congress has not come up with a system that satisfies consumer advocates’ demands. I detail that government failure in my recent Cato Policy Analysis, “Reputation under Regulation: The Fair Credit Reporting Act at 40 and Lessons for the Internet Privacy Debate.”

Viral Video Strips Down Strip-Search Machines

The TSA’s response yesterday to a video challenging strip-search machines was so weak that it acts as a virtual confession to the fact that objects can be snuck through them.

In the video, TSA strip-search objector Jonathan Corbett demonstrates how he put containers in his clothes along his sides where they would appear the same as the background in TSA’s displays. TSA doesn’t refute that it can be done or that Corbett did it in his demonstration. More at Wired’s Threat Level blog.

More than six months ago, the D.C. Circuit Court of Appeals required the Transportation Security Administration to commence a rulemaking to justify its strip-search machine/prison-style pat-down policy. TSA has not done so. The result is that the agency still does not have a sturdy security system in place at airports. It’s expensive, inconvenient, error-prone, and privacy-invasive.

Making airline security once again the responsibility of airlines and airports would vastly improve the situation, because these actors are naturally inclined to blend security, cost-control, and convenience with customer service and comforts, including privacy.

I have a slight difference with Corbett’s characterization of the problem. The weakness of body scanners does not put the public at great danger. The chance of anyone exploiting this vulnerability and smuggling a bomb on board a domestic U.S. flight is very low. The problem is that these machines impose huge costs in dollars and privacy that do not foreclose a significant risk any better than the traditional magnetometer.
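
One way to make that point concrete is a rough cost-effectiveness comparison. The sketch below is not an actual estimate: every input is a hypothetical placeholder I have invented for illustration, and only the structure of the calculation (marginal risk reduction over a magnetometer, divided into annual cost) is the point.

```python
# Cost-effectiveness sketch: every number is a hypothetical placeholder, not a real estimate.
ANNUAL_SCANNER_COST = 1.2e9            # yearly spend on body scanners, dollars (assumed)
ATTEMPTS_PER_YEAR = 0.02               # expected smuggled-bomb attempts per year (assumed)
EXTRA_DETECTION_VS_MAGNETOMETER = 0.05 # added share of attempts caught beyond a magnetometer (assumed)

attacks_prevented_per_year = ATTEMPTS_PER_YEAR * EXTRA_DETECTION_VS_MAGNETOMETER
cost_per_attack_prevented = ANNUAL_SCANNER_COST / attacks_prevented_per_year

print(f"Additional attempts stopped per year: {attacks_prevented_per_year:.3f}")          # 0.001
print(f"Implied cost per additional attempt stopped: ${cost_per_attack_prevented:,.0f}")  # ~$1.2 trillion
```

Plug in whatever inputs you find plausible; unless the baseline risk or the scanners’ marginal detection advantage is far larger than anyone claims, the implied cost per additional attack prevented stays enormous.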

Corbett is right when he urges people to “demand of your legislators and presidential candidates that they get rid of this eight billion-dollar-a-year waste known as the TSA and privatize airport security.”

A ‘Privacy Bill of Rights’: Second Verse, Same as the First

The White House announces a “privacy bill of rights” today. We went over this a year ago, when Senators Kerry (D-MA) and McCain (R-AZ) introduced their “privacy bill of rights.”

The post I wrote then is called “The ‘Privacy Bill of Rights’ Is in the Bill of Rights,” and its admonitions apply equally well today:

It takes a lot of gall to put the moniker “Privacy Bill of Rights” on legislation that reduces liberty in the information economy while the Fourth Amendment remains tattered and threadbare. Nevermind “reasonable expectations”: the people’s right to be secure against unreasonable searches and seizures is worn down to the nub.

Senators Kerry and McCain [and now the White House] should look into the privacy consequences of the Internal Revenue Code. How is privacy going to fare under Obamacare? How is the Department of Homeland Security doing with its privacy efforts? What is an “administrative search”?

The Government’s Surveillance-Security Fantasies

If two data points are enough to draw a trend line, the trend I’ve spotted is government seeking to use data mining where it doesn’t work.

A comment in the Chronicle of Higher Education recently argued that universities should start mining data about student behavior in order to thwart incipient on-campus violence.

Existing technology … offers universities an opportunity to gaze into their own crystal balls in an effort to prevent large-scale acts of violence on campus. To that end, universities must be prepared to use data mining to identify and mitigate the potential for tragedy.

No, it doesn’t. And no, they shouldn’t.

Jeff Jonas and I wrote in our 2006 Cato Policy Analysis, “Effective Counterterrorism and the Limited Role of Predictive Data Mining,” that data mining doesn’t have the capacity to predict rare events like terrorism or school shootings. The precursors of such events do not form consistent patterns the way, say, credit card fraud does.

Data mining for campus violence would produce many false leads while missing real events. The costs in dollars and privacy would not be rewarded by gains in security and safety.
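
To see why, a minimal back-of-the-envelope sketch helps. The numbers below are hypothetical, chosen only to show the base-rate arithmetic (they are not drawn from the paper or any study): even a classifier that is right 99 percent of the time drowns a one-in-twenty-thousand event in false alarms.

```python
# Base-rate sketch: all numbers are hypothetical, for illustration only.
STUDENT_POPULATION = 20_000       # students screened on a large campus (assumed)
TRUE_PLANNERS = 1                 # actual incipient attackers in that population (assumed)
SENSITIVITY = 0.99                # chance the model flags a true planner (assumed)
FALSE_POSITIVE_RATE = 0.01        # chance it flags an innocent student (assumed)

true_flags = TRUE_PLANNERS * SENSITIVITY
false_flags = (STUDENT_POPULATION - TRUE_PLANNERS) * FALSE_POSITIVE_RATE
total_flags = true_flags + false_flags

print(f"Expected flags: {total_flags:.0f}")                               # ~201
print(f"Expected false alarms: {false_flags:.0f}")                        # ~200
print(f"Chance a flag is a real threat: {true_flags / total_flags:.2%}")  # ~0.49%
```

With those invented inputs, roughly two hundred innocent students get investigated for every genuine threat flagged, and any single flag has less than a half-percent chance of being real. Rare events make the arithmetic come out this way almost regardless of how good the model is.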

The same is true of foreign uprisings. They have gross commonality—people rising up against their governments—but there will be no pattern in data from past events in, say, Egypt, that would predict how events will unfold in, say, China.

But an AP story on Military.com reports that various U.S. security and law enforcement agencies want to mine publicly available social media for evidence of forthcoming terror attacks and uprisings. The story is called “US Seeks to Mine Social Media to Predict Future.”

Gathering together social media content has privacy costs, even if each bit of data was released publicly online. And it certainly has dollar costs that could be quite substantial. But the benefits would be slim indeed.

I’m with the critics who worry about overreliance on technology rather than trained and experienced human analysts. Is it too much to think that the U.S. might have to respond to events carefully and thoughtfully as they unfold? People with cultural, historical, and linguistic knowledge seem far better suited to predicting and responding to events in their regions of focus than any algorithm.

There’s a dream, I suppose, that data mining can eliminate risk or make the future knowable. It can’t, and (in the one sense in which the future is knowable) it won’t.

Silicon Valley Doesn’t Care About Privacy, Security

That’s the buzz in the face of the revelation that a mobile social network called Path was copying address book information from users’ iPhones without notifying them. Path’s voluble CEO David Morin dismissed this as a problem until, as Nick Bilton put it on the New York Times’ Bits blog, he “became uncharacteristically quiet as the Internet disagreed and erupted in outrage.”

After Morin belatedly apologized and promised to destroy the ill-gotten data, some of Silicon Valley’s heavyweights closed ranks around him. This raises the question whether “the management philosophy of ‘ask for forgiveness, not permission’ is becoming the ‘industry best practice’” in Silicon Valley.

Since the first big privacy firestorm (which I put in 1999, with DoubleClick/Abacus), cultural differences have been at the core of these controversies. The people inside the offending companies are utterly focused on the amazing things they plan to do with consumer data. In relation to their astoundingly (ahem) path-breaking plans, they can’t see how anyone could object. They’re wrong, of course, and when they meet sufficient resistance, they and their peers have to adjust to the reality that people don’t see the value the companies believe they’ll provide, nor have people consented to the uses being made of their data.

This conversation—the push and pull between innovative-excessive companies and a more reticent public made up of engineers, advocates, and ordinary people—is where the privacy policies of the future are being set. The legislation proposed in Congress and the enforcement actions of the FTC are whitecaps on much more substantial waves of societal development.

An interesting contrast is the (ahem) innovative lawsuit that the Electronic Privacy Information Center filed against the Federal Trade Commission last week. EPIC is asking the court to compel the FTC to act against Google, which recently changed and streamlined its privacy policies. EPIC is unlikely to prevail—the court will be loath to deprive the agency of discretion this way—but EPIC is working very hard to make Washington, D.C. the center of society when it comes to privacy and related values.

Washington, D.C. has no capacity to tune the balances between privacy and other values. And Silicon Valley is not a sentient being. (Heck, it’s not even a valley!) If a certain disregard for privacy and data security has developed among innovators over-excited about their plans for the digital world, that’s wrong. If a company misusing data has harmed consumers, it should pay to make those consumers whole. Path is, of course, paying various reputation costs for getting crosswise with consumer sentiment.

And that’s the right thing. The company should answer to the community (and no other authority). This conversation is the corrective.