Tag: privacy

FTC Issues Groundhog Report on Privacy

The Federal Trade Commission issued a report today calling on companies “to adopt best privacy practices.” In related news, most people support airline safety… The report also “recommends that Congress consider enacting general privacy legislation, data security and breach notification legislation, and data broker legislation.”

This is regulatory cheerleading of the same kind our government’s all-purpose trade regulator put out a dozen years ago. In May of 2000, the FTC issued a report finding “that legislation is necessary to ensure further implementation of fair information practices online” and recommending a framework for such legislation. Congress did not act on that, and things are humming along today without top-down regulation of information practices on the Internet.

By “humming along,” I don’t mean that all privacy problems have been solved. (And they certainly wouldn’t have been solved if Congress had passed a law saying they should be.) “Humming along” means that ongoing push-and-pull among companies and consumers is defining the information practices that best serve consumers in all their needs, including privacy.

Congress won’t be enacting legislation this year, and there doesn’t seem to be any groundswell for new regulation in the next Congress, though President Obama’s reelection would leave him unencumbered by future elections and so inclined to indulge the pro-regulatory fantasies of his supporters.

The folks who want regulation of the Internet in the name of privacy should explain how they will do better than Congress did with credit reporting. In forty years of regulating credit bureaus, Congress has not come up with a system that satisfies consumer advocates’ demands. I detail that government failure in my recent Cato Policy Analysis, “Reputation under Regulation: The Fair Credit Reporting Act at 40 and Lessons for the Internet Privacy Debate.”

Viral Video Strips Down Strip-Search Machines

The TSA’s response yesterday to a video challenging its strip-search machines was so weak that it acts as a virtual confession that objects can be snuck through them.

In the video, TSA strip-search objector Jonathan Corbett demonstrates how he put containers in his clothes along his sides where they would appear the same as the background in TSA’s displays. TSA doesn’t refute that it can be done or that Corbett did it in his demonstration. More at Wired’s Threat Level blog.

More than six months ago, the D.C. Circuit Court of Appeals required the Transportation Security Administration to begin a rulemaking to justify its strip-search machine/prison-style pat-down policy. TSA has not done so, and the agency still lacks a sound security system at airports: the one in place is expensive, inconvenient, error-prone, and privacy-invasive.

Making airline security once again the responsibility of airlines and airports would vastly improve the situation, because these actors are naturally inclined to blend security, cost-control, and convenience with customer service and comforts, including privacy.

I have a slight difference with Corbett’s characterization of the problem. The weakness of body scanners does not put the public at great danger. The chance of anyone exploiting this vulnerability and smuggling a bomb on board a domestic U.S. flight is very low. The problem is that these machines impose huge costs in dollars and privacy that do not foreclose a significant risk any better than the traditional magnetometer.
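
The point can be made with back-of-the-envelope arithmetic. Here is a minimal sketch; every figure in it is an assumption chosen for illustration, not a number from Corbett, the TSA, or any study:

```python
# Rough break-even sketch: body scanners vs. traditional magnetometers.
# Every number below is an assumption for illustration only.

annual_scanner_premium = 1.2e9  # assume: yearly cost of scanners beyond magnetometers
damage_per_attack = 5e9         # assume: economic loss from one successful attack

# How many otherwise-successful attacks per year must the scanners stop,
# over and above what magnetometers already stop, to justify the spending?
break_even = annual_scanner_premium / damage_per_attack
print(f"Break-even: {break_even:.2f} extra attacks averted per year")
# => 0.24, roughly one magnetometer-beating attack foreclosed every four
# years. A machine that can be defeated the way Corbett defeated it offers
# nothing like that margin, and the privacy cost isn't in the ledger yet.
```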

Corbett is right when he urges people to “demand of your legislators and presidential candidates that they get rid of this eight billion-dollar-a-year waste known as the TSA and privatize airport security.”

A ‘Privacy Bill of Rights’: Second Verse, Same as the First

The White House announces a “privacy bill of rights” today. We went over this a year ago, when Senators Kerry (D-MA) and McCain (R-AZ) introduced their “privacy bill of rights.”

The post is called “The ‘Privacy Bill of Rights’ Is in the Bill of Rights,” and its admonitions apply equally well today:

It takes a lot of gall to put the moniker “Privacy Bill of Rights” on legislation that reduces liberty in the information economy while the Fourth Amendment remains tattered and threadbare. Never mind “reasonable expectations”: the people’s right to be secure against unreasonable searches and seizures is worn down to the nub.

Senators Kerry and McCain [and now the White House] should look into the privacy consequences of the Internal Revenue Code. How is privacy going to fare under Obamacare? How is the Department of Homeland Security doing with its privacy efforts? What is an “administrative search”?

The Government’s Surveillance-Security Fantasies

If two data points are enough to draw a trend line, the trend I’ve spotted is government seeking to use data mining where it doesn’t work.

A comment in the Chronicle of Higher Education recently argued that universities should start mining data about student behavior in order to thwart incipient on-campus violence.

Existing technology … offers universities an opportunity to gaze into their own crystal balls in an effort to prevent large-scale acts of violence on campus. To that end, universities must be prepared to use data mining to identify and mitigate the potential for tragedy.

No, it doesn’t. And no, they shouldn’t.

Jeff Jonas and I wrote in our 2006 Cato Policy Analysis, “Effective Counterterrorism and the Limited Role of Predictive Data Mining,” that data mining doesn’t have the capacity to predict rare events like terrorism or school shootings. The precursors of such events are not consistent the way, say, the indicia of credit card fraud are.

Data mining for campus violence would produce many false leads while missing real events. The costs in dollars and privacy would not be rewarded by gains in security and safety.
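
The arithmetic behind that conclusion is the base-rate problem. A minimal sketch with assumed, illustrative numbers (not figures from the paper) shows how the rarity of an event swamps even an implausibly accurate screen:

```python
# Base-rate arithmetic: why mining for rare events yields mostly false leads.
# All numbers are assumptions for illustration only.

population = 20_000_000     # assume: students screened nationwide
real_threats = 10           # assume: genuinely dangerous individuals among them
hit_rate = 0.99             # assume: the screen flags 99% of real threats
false_alarm_rate = 0.01     # assume: the screen flags 1% of innocent students

true_leads = real_threats * hit_rate
false_leads = (population - real_threats) * false_alarm_rate
precision = true_leads / (true_leads + false_leads)

print(f"True leads:  {true_leads:,.0f}")
print(f"False leads: {false_leads:,.0f}")
print(f"Chance a flagged student is a real threat: {precision:.4%}")
# About 10 true leads drown in about 200,000 false ones, so each flag is
# almost certainly wrong. And a 1% false-alarm rate is generous: without
# consistent precursors, no model gets close to it.
```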

The same is true of foreign uprisings. They have gross commonality—people rising up against their governments—but there will be no pattern in data from past events in, say, Egypt, that would predict how events will unfold in, say, China.

But an AP story on Military.com reports that various U.S. security and law enforcement agencies want to mine publicly available social media for evidence of forthcoming terror attacks and uprisings. The story is called “US Seeks to Mine Social Media to Predict Future.”

Gathering together social media content has privacy costs, even if each bit of data was released publicly online. And it certainly has dollar costs that could be quite substantial. But the benefits would be slim indeed.

I’m with the critics who worry about overreliance on technology rather than trained and experienced human analysts. Is it too much to think that the U.S. might have to respond to events carefully and thoughtfully as they unfold? People with cultural, historical, and linguistic knowledge seem far better suited to predicting and responding to events in their regions of focus than any algorithm.

There’s a dream, I suppose, that data mining can eliminate risk or make the future knowable. It can’t, and it won’t, even granting that the future is knowable in some sense.

Silicon Valley Doesn’t Care About Privacy, Security

That’s the buzz in the face of the revelation that a mobile social network called Path was copying address book information from users’ iPhones without notifying them. Path’s voluble CEO David Morin dismissed the problem until, as Nick Bilton put it on the New York Times’ Bits blog, he “became uncharacteristically quiet as the Internet disagreed and erupted in outrage.”

After Morin belatedly apologized and promised to destroy the ill-gotten data, some of Silicon Valley’s heavyweights closed ranks around him. This raises the question whether “the management philosophy of ‘ask for forgiveness, not permission’ is becoming the ‘industry best practice’” in Silicon Valley.

Since the first big privacy firestorm (which I put in 1999, with DoubleClick/Abacus), cultural differences have been at the core of these controversies. The people inside the offending companies are utterly focused on the amazing things they plan to do with consumer data. In relation to their astoundingly (ahem) path-breaking plans, they can’t see how anyone could object. They’re wrong, of course, and when they meet sufficient resistance, they and their peers have to adjust to the reality that people don’t see the value they believe they’ll provide, nor do people consent to the uses being made of their data.

This conversation—the push and pull between innovative-excessive companies and a more reticent public made up of engineers, advocates, and ordinary people—is where the privacy policies of the future are being set. The legislation we see proposed in Congress and the enforcement actions we see from the FTC are whitecaps on much more substantial waves of societal development.

An interesting contrast is the (ahem) innovative lawsuit that the Electronic Privacy Information Center filed against the Federal Trade Commission last week. EPIC is asking the court to compel the FTC to act against Google, which recently changed and streamlined its privacy policies. EPIC is unlikely to prevail—the court will be loath to deprive the agency of discretion this way—but EPIC is working very hard to make Washington, D.C. the center of society when it comes to privacy and related values.

Washington, D.C. has no capacity to tune the balances between privacy and other values. And Silicon Valley is not a sentient being. (Heck, it’s not even a valley!) If a certain disregard for privacy and data security has developed among innovators over-excited about their plans for the digital world, that’s wrong. If a company misusing data has harmed consumers, it should pay to make those consumers whole. Path is, of course, paying various reputation costs for getting crosswise with consumer sentiment.

And that’s the right thing. The company should answer to the community (and no other authority). This conversation is the corrective.

The Senate’s SOPA Counterattack?: Cybersecurity the Undoing of Privacy

The Daily Caller reports that Senator Harry Reid (D-NV) is planning another effort at Internet regulation—right on the heels of the SOPA/PIPA debacle. The article seems calculated to insinuate that a follow-on to SOPA/PIPA might slip into cybersecurity legislation the Senate plans to take up. Whether that’s in the works or not, I’ll detail here the privacy threats in cybersecurity language being circulated on the Hill.

A Senate draft currently making the rounds is called the “Cybersecurity Information Sharing Act of 2012.” It sets up “cybersecurity exchanges” at which government and corporate entities would share threat information and solutions.

Sharing of information does not require federal approval or planning, of course. Information sharing happens all the time according to market processes. But “information sharing” is the solution Congress has seized upon, so federal information sharing programs we will have. Think of all this as a “see something, say something” campaign for corporate computer security people. Or perhaps “e-fusion centers.”

Reading over the draft, I was struck by sweeping language purporting to create “affirmative authority to monitor and defend against cybersecurity threats.” To understand the strangeness of these words, we must start at the beginning:

We live in a free country where all that is not forbidden is allowed. There is no need in such a country for “affirmative” authority to act. So what does this section do when it purports to permit private and governmental entities to monitor their information systems, operate active defenses, and such? It sweeps aside nearly all other laws controlling them.

“Consistent with the Constitution of the United States and notwithstanding any other provision of law,” it says (emphasis added), entities may act to preserve the security of their systems. This means that the only law controlling their actions would be the Constitution.

It’s nice that the Constitution would apply</sarcasm>, but the obligations in the Privacy Act of 1974 would not. The Electronic Communications Privacy Act would be void. Even the requirements of the E-Government Act of 2002, such as privacy impact assessments, would be swept aside.

The Constitution doesn’t constrain private actors, of course. This language would immunize them from liability under any and all regulation and under state or common law. Private actors would not be subject to suit for breaching contractual promises of confidentiality. They would not be liable for violating the privacy torts. Anything goes so long as one can make a claim to defending “information systems,” a term that refers to anything having to do with computers.

Elsewhere, the bill creates an equally sweeping immunity against law-breaking so long as the law-breaking provides information to a “cybersecurity exchange.” This is a breathtaking exemption from the civil and criminal laws that protect privacy, among other things.

(1) IN GENERAL.—No civil or criminal cause of action shall lie or be maintained in any Federal or State court against any non-Federal governmental or private entity, or any officer, employee, or agent of such an entity, and any such action shall be dismissed promptly, for the disclosure of a cybersecurity threat indicator to—
(A) a cybersecurity exchange under subsection (a)(1); or
(B) a private entity under subsection (b)(1), provided the cybersecurity threat indicator is promptly shared with a cybersecurity exchange.

In addition to this immunity from suit, the bill creates an equally sweeping “good faith” defense:

Where a civil or criminal cause of action is not barred under paragraph (1), a good faith reliance by any person on a legislative authorization, a statutory authorization, or a good faith determination that this Act permitted the conduct complained of, is a complete defense against any civil or criminal action brought under this Act or any other law.

Good faith is a question of fact, and a corporate security official could argue successfully that she acted in good faith if a government official told her to turn over private data. This language allows the corporate sector to abandon its responsibility to follow the law in favor of following government edicts. We’ve seen attacks on the rule of law like this before.

A House Homeland Security subcommittee marked up a counterpart to this bill last week. It does not have similar language that I could find.

In 2009, I testified before the House Science Committee on cybersecurity, skeptical of the government’s ability to tackle cybersecurity but cognizant that the government must secure its own systems. “Cybersecurity exchanges” are a blind stab at addressing the many challenges in securing computers, networks, and data, and I think they are unnecessary at best. As currently drafted, cybersecurity exchanges would come at a devastating cost to our online privacy.

Congress seems poised once again to violate the rule from the SOPA/PIPA disaster: “First, do no harm to the Internet.”

Kashmir Hill Has It Right…

on the Google privacy policy change.

The idea that people should be able to opt out of a company’s privacy policy strikes me as ludicrous.

Plus she embeds a valuable discussion among her Xtranormal friends. Highlight:

“Well, members of Congress don’t send angry letters about privacy issues very often.”

“Oh, well, actually, they do.”

Read the whole thing. Watch the whole thing. And, if you actually care, take some initiative to protect your privacy from Google, a thing you are well-empowered to do by the browser and computer you are using to view this post.