Tag: privacy

A ‘Privacy Bill of Rights’: Second Verse, Same as the First

The White House announced a “privacy bill of rights” today. We went over this a year ago, when Senators Kerry (D-MA) and McCain (R-AZ) introduced their “privacy bill of rights.”

The post is called “The ‘Privacy Bill of Rights’ Is in the Bill of Rights,” and its admonitions apply equally well today:

It takes a lot of gall to put the moniker “Privacy Bill of Rights” on legislation that reduces liberty in the information economy while the Fourth Amendment remains tattered and threadbare. Never mind “reasonable expectations”: the people’s right to be secure against unreasonable searches and seizures is worn down to the nub.

Senators Kerry and McCain [and now the White House] should look into the privacy consequences of the Internal Revenue Code. How is privacy going to fare under Obamacare? How is the Department of Homeland Security doing with its privacy efforts? What is an “administrative search”?

The Government’s Surveillance-Security Fantasies

If two data points are enough to draw a trend line, the trend I’ve spotted is government seeking to use data mining where it doesn’t work.

A comment in the Chronicle of Higher Education recently argued that universities should start mining data about student behavior in order to thwart incipient on-campus violence.

Existing technology … offers universities an opportunity to gaze into their own crystal balls in an effort to prevent large-scale acts of violence on campus. To that end, universities must be prepared to use data mining to identify and mitigate the potential for tragedy.

No, it doesn’t. And no, they shouldn’t.

Jeff Jonas and I wrote in our 2006 Cato Policy Analysis, “Effective Counterterrorism and the Limited Role of Predictive Data Mining,” that data mining doesn’t have the capacity to predict rare events like terrorism or school shootings. The precursors of such events are not consistent the way, say, credit card fraud is.

Data mining for campus violence would produce many false leads while missing real events. The costs in dollars and privacy would not be rewarded by gains in security and safety.
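The false-lead problem is worth making concrete. Here is a minimal back-of-the-envelope sketch of the base-rate arithmetic, with every number assumed for illustration (none comes from the paper):

```python
# A back-of-the-envelope sketch of the base-rate problem with predicting
# rare events. All numbers here are illustrative assumptions, not data
# from the Cato paper.
population = 20_000         # hypothetical student body scanned by the system
detection_rate = 0.99       # generously assumed: 99% of real precursors caught
false_positive_rate = 0.01  # generously assumed: only 1% of innocents flagged
actual_threats = 1          # the event is rare: roughly one real case

false_leads = (population - actual_threats) * false_positive_rate
real_cases_caught = actual_threats * detection_rate

print(f"False leads: {false_leads:.0f}")             # ~200 innocent people flagged
print(f"Real cases caught: {real_cases_caught:.2f}")  # less than 1 in expectation
```

Even granting the system generous accuracy on both measures, the rarity of the event means innocent people flagged outnumber real cases by roughly two hundred to one, which is why the dollar and privacy costs go unrewarded.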

The same is true of foreign uprisings. They have gross commonality—people rising up against their governments—but there will be no pattern in data from past events in, say, Egypt, that would predict how events will unfold in, say, China.

But an AP story on Military.com reports that various U.S. security and law enforcement agencies want to mine publicly available social media for evidence of forthcoming terror attacks and uprisings. The story is called “US Seeks to Mine Social Media to Predict Future.”

Gathering together social media content has privacy costs, even if each bit of data was released publicly online. And it certainly has dollar costs that could be quite substantial. But the benefits would be slim indeed.

I’m with the critics who worry about overreliance on technology rather than trained and experienced human analysts. Is it too much to think that the U.S. might have to respond to events carefully and thoughtfully as they unfold? People with cultural, historical, and linguistic knowledge seem far better suited to predicting and responding to events in their regions of focus than any algorithm.

There’s a dream, I suppose, that data mining can eliminate risk or make the future knowable. It can’t, and—the future not being knowable in that sense—it won’t.

Silicon Valley Doesn’t Care About Privacy, Security

That’s the buzz in the face of the revelation that a mobile social network called Path was copying address book information from users’ iPhones without notifying them. Path’s voluble CEO David Morin dismissed the problem until, as Nick Bilton put it on the New York Times Bits blog, he “became uncharacteristically quiet as the Internet disagreed and erupted in outrage.”

After Morin belatedly apologized and promised to destroy the wrongly gotten data, some of Silicon Valley’s heavyweights closed ranks around him. This raises the question whether “the management philosophy of ‘ask for forgiveness, not permission’ is becoming the ‘industry best practice’ ” in Silicon Valley.

Since the first big privacy firestorm (which I put in 1999, with DoubleClick/Abacus), cultural differences have been at the core of these controversies. The people inside the offending companies are utterly focused on the amazing things they plan to do with consumer data. In relation to their astoundingly (ahem) path-breaking plans, they can’t see how anyone could object. They’re wrong, of course, and when they meet sufficient resistance, they and their peers have to adjust to the reality that people don’t see the value they believe they’ll provide nor do people consent to the uses of data they’re making.

This conversation—the push and pull between innovative-excessive companies and a more reticent public made up of engineers, advocates, and ordinary people—is where the privacy policies of the future are being set. The legislation proposed in Congress and the enforcement actions of the FTC are whitecaps on much more substantial waves of societal development.

An interesting contrast is the (ahem) innovative lawsuit that the Electronic Privacy Information Center filed against the Federal Trade Commission last week. EPIC is asking the court to compel the FTC to act against Google, which recently changed and streamlined its privacy policies. EPIC is unlikely to prevail—the court will be loath to deprive the agency of discretion this way—but EPIC is working very hard to make Washington, D.C. the center of society when it comes to privacy and related values.

Washington, D.C. has no capacity to tune the balances between privacy and other values. And Silicon Valley is not a sentient being. (Heck, it’s not even a valley!) If a certain disregard for privacy and data security has developed among innovators over-excited about their plans for the digital world, that’s wrong. If a company misusing data has harmed consumers, it should pay to make those consumers whole. Path is, of course, paying various reputational costs for getting crosswise with consumer sentiment.

And that’s the right thing. The company should answer to the community (and no other authority). This conversation is the corrective.

The Senate’s SOPA Counterattack?: Cybersecurity the Undoing of Privacy

The Daily Caller reports that Senator Harry Reid (D-NV) is planning another effort at Internet regulation—right on the heels of the SOPA/PIPA debacle. The article seems calculated to insinuate that a follow-on to SOPA/PIPA might slip into cybersecurity legislation the Senate plans to take up. Whether that’s in the works or not, I’ll detail here the privacy threats in cybersecurity language being circulated on the Hill.

A Senate draft currently making the rounds is called the “Cybersecurity Information Sharing Act of 2012.” It sets up “cybersecurity exchanges” at which government and corporate entities would share threat information and solutions.

Sharing of information does not require federal approval or planning, of course. Information sharing happens all the time according to market processes. But “information sharing” is the solution Congress has seized upon, so federal information sharing programs we will have. Think of all this as a “see something, say something” campaign for corporate computer security people. Or perhaps “e-fusion centers.”

Reading over the draft, I was struck by sweeping language purporting to create “affirmative authority to monitor and defend against cybersecurity threats.” To understand the strangeness of these words, we must start at the beginning:

We live in a free country where all that is not forbidden is allowed. There is no need in such a country for “affirmative” authority to act. So what does this section do when it purports to permit private and governmental entities to monitor their information systems, operate active defenses, and such? It sweeps aside nearly all other laws controlling them.

“Consistent with the Constitution of the United States and notwithstanding any other provision of law,” it says (emphasis added), entities may act to preserve the security of their systems. This means that the only law controlling their actions would be the Constitution.

It’s nice that the Constitution would apply</sarcasm>, but the obligations in the Privacy Act of 1974 would not. The Electronic Communications Privacy Act would be void. Even the requirements of the E-Government Act of 2002, such as privacy impact assessments, would be swept aside.

The Constitution doesn’t constrain private actors, of course. This language would immunize them from liability under any and all regulation and under state or common law. Private actors would not be subject to suit for breaching contractual promises of confidentiality. They would not be liable for violating the privacy torts. Anything goes so long as one can make a claim to defending “information systems,” a term that refers to anything having to do with computers.

Elsewhere, the bill creates an equally sweeping immunity against law-breaking so long as the law-breaking provides information to a “cybersecurity exchange.” This is a breathtaking exemption from the civil and criminal laws that protect privacy, among other things.

(1) IN GENERAL.—No civil or criminal cause of action shall lie or be maintained in any Federal or State court against any non-Federal governmental or private entity, or any officer, employee, or agent of such an entity, and any such action shall be dismissed promptly, for the disclosure of a cybersecurity threat indicator to—
(A) a cybersecurity exchange under subsection (a)(1); or
(B) a private entity under subsection (b)(1), provided the cybersecurity threat indicator is promptly shared with a cybersecurity exchange.

In addition to this immunity from suit, the bill creates an equally sweeping “good faith” defense:

Where a civil or criminal cause of action is not barred under paragraph (1), a good faith reliance by any person on a legislative authorization, a statutory authorization, or a good faith determination that this Act permitted the conduct complained of, is a complete defense against any civil or criminal action brought under this Act or any other law.

Good faith is a question of fact, and a corporate security official could argue successfully that she acted in good faith if a government official told her to turn over private data. This language allows the corporate sector to abandon its responsibility to follow the law in favor of following government edicts. We’ve seen attacks on the rule of law like this before.

A House Homeland Security subcommittee marked up a counterpart to this bill last week. It does not have similar language that I could find.

In 2009, I testified before the House Science Committee on cybersecurity, skeptical of the government’s ability to tackle cybersecurity but cognizant that the government must secure its own systems. “Cybersecurity exchanges” are a blind stab at addressing the many challenges in securing computers, networks, and data, and I think they are unnecessary at best. According to current plans, cybersecurity exchanges come at a devastating cost to our online privacy.

Congress seems poised once again to violate the rule from the SOPA/PIPA disaster: “First, do no harm to the Internet.”

Kashmir Hill Has It Right…

on the Google privacy policy change.

The idea that people should be able to opt out of a company’s privacy policy strikes me as ludicrous.

Plus she embeds a valuable discussion among her Xtranormal friends. Highlight:

“Well, members of Congress don’t send angry letters about privacy issues very often.”

“Oh, well, actually, they do.”

Read the whole thing. Watch the whole thing. And, if you actually care, take some initiative to protect your privacy from Google, a thing you are well-empowered to do by the browser and computer you are using to view this post.

The Second-Day Story on U.S. v. Jones

Does a more careful reading of the Supreme Court’s decision in U.S. v. Jones turn up a lurking victory for the government?

Modern media moves so fast that the second-day story happens in the afternoon of the first. The Supreme Court ruled unanimously Monday morning that government agents conduct a Fourth Amendment search when they place a GPS device on a private vehicle and use it to monitor a suspect’s whereabouts for weeks at a time. Monday afternoon, a couple of commentators suggested that the case is less a win than many thought because it didn’t explicitly rule that a warrant is required to attach a GPS device to a vehicle.

Writing on the Volokh Conspiracy blog, George Washington University law professor Orin Kerr noted “What Jones Does Not Hold.”

The Court declined to reach when the installation of the device is reasonable or unreasonable. … So we actually don’t yet know if a warrant is required to install a GPS device; we just know that the installation of the device is a Fourth Amendment “search.”

And over on Scotusblog, Tom Goldstein found that “The Government Fared Much Better Than Everyone Realizes”:

[D]oes the “search” caused by installing a GPS device require a warrant? The answer may be no, given that no member of the Court squarely concludes it does and four members of the Court (those who join the Alito concurrence) do not believe it constitutes a search at all.

So there is a constitutional search when the government attaches a GPS device to a vehicle, but the Court conspicuously declined to say that such a search requires a warrant. Do we have an “a-ha” moment?

When the Supreme Court granted certiorari in the case, it took the unusual step of adding to the questions it wanted addressed. In addition to “[w]hether the warrantless use of a tracking device on respondent’s vehicle to monitor its movements on public streets violated the Fourth Amendment,” the Court wanted to know “whether the government violated respondent’s Fourth Amendment rights by installing the GPS tracking device on his vehicle without a valid warrant and without his consent.” These are both compound questions, but the dimension added by the second is the Fourth Amendment meaning of attaching a device to a vehicle. The case was about attaching a device to a vehicle, and if the Court didn’t walk through every clause in each of the questions presented, that’s why.

On that central question in the case, the government argued the following: “Attaching the GPS tracking device to respondent’s vehicle was not a search or seizure under the Fourth Amendment.” The government lost, full stop.

Now, it’s true that the Court’s majority opinion didn’t explicitly find that the “search” that occurs when attaching and using a GPS device requires a warrant, but look at its characterization of the opinion it affirmed: “The United States Court of Appeals for the District of Columbia Circuit reversed [Jones’s] conviction because of admission of the evidence obtained by warrantless use of the GPS device which, it said, violated the Fourth Amendment.”

The Court did decline to consider the argument that the government might be able to attach a device based on reasonable suspicion or probable cause—that argument was “forfeited” by the government’s failure to raise it in the lower courts—but if the Supreme Court were limiting its holding to the attachment-as-search issue, it would have remanded the case to the lower courts for further proceedings consistent with the opinion. It did not, and the sensible inference to draw from that is that the general rule applies: a warrant is required in the absence of one of the customary exceptions. Failing to make that explicit was not “opening a door” to a latent government victory. U.S. v. Jones was a unanimous decision rejecting the government’s warrantless use of outré technology to defeat the natural privacy protections provided by law and physics.

At least one serious lawyer I know has raised the point that I address here, and it is a real one, but some in the commentariat are a little too showy with their analysis and far too willing to go looking for a government victory in what is nothing other than a government defeat.

U.S. v. Jones: A Big Privacy Win

The Supreme Court has delivered a big win for privacy in U.S. v. Jones. That’s the case in which government agents placed a GPS device on a car and used it to track a person round-the-clock for four weeks. The question before the Court was whether the government may do this in the absence of a valid warrant. All nine justices say No.

That’s big, important news. The Supreme Court will not allow developments in technology to outstrip constitutional protections the way it did in Olmstead.

Olmstead v. United States was a 1928 decision in which the Court held that there was no Fourth Amendment search or seizure involved in wiretapping because law enforcement made “no entry of the houses or offices of the defendants.” It took 39 years for the Court to revisit that restrictive, property-based ruling and find that Fourth Amendment interests exist outside of buildings. “[T]he Fourth Amendment protects people, not places” went the famous line from Katz v. United States (1967), which has been the lodestar ever since.

For all its good outcome, though, Katz has not served the Fourth Amendment and privacy very well. The Cato Institute’s brief argued to the Court that the doctrine arising from Katz “is weak as a rule for deciding cases.” As developed since 1967, “the ‘reasonable expectation of privacy’ test reverses the inquiry required by the Fourth Amendment and biases Fourth Amendment doctrine against privacy.”

Without rejecting Katz and reasonable expectations, the Jones majority returned to property rights as a basis for Fourth Amendment protection. “The Government physically occupied private property for the purpose of obtaining information” when it attached a GPS device to a private vehicle and used it to gather information. This was a search that the government could not conduct without a valid warrant.

The property rationale for deciding the case had the support of five justices, led by Justice Scalia. The other four justices would have used “reasonable expectations” to decide the case the same way, so they concurred in the judgment but did not join the majority opinion. They found many flaws in the use of property and “18th-century tort law” to decide the case.

Justice Sotomayor was explicit in supporting both rationales for protecting privacy. With Justice Scalia, she argued, “When the Government physically invades personal property to gather information, a search occurs.” This language—clearer, and using the legal term of art “personal property,” which Justice Scalia did not—would seem to encompass objects like cell phones, the crucial tool we use today to collect, maintain, and transport our digital effects. Justice Sotomayor emphasized in her separate concurrence that the majority did not reject Katz and “reasonable expectations” in using property as the grounds for this decision.

Justice Sotomayor also deserves special notice for mentioning the pernicious third-party doctrine. “[I]t may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.” The third-party doctrine cuts against our Fourth Amendment interests in information we share with ISPs, email service providers, financial services providers, and so on. Reconsidering it is very necessary.

Justice Alito’s concurrence is no ringing endorsement of the “reasonable expectation of privacy” test. But he and the justices joining him see many problems with applying Justice Scalia’s property rationale as they interpreted it.

Along with the Scalia-authored Kyllo decision of 2001, Jones is a break from precedent. It may seem like a return to the past, but it is also a return to a foundation on which privacy can be more secure.

More commentary here in the coming days and weeks will explore the case’s meaning more fully. Hopefully, more Supreme Court cases in coming years and decades will clarify and improve Fourth Amendment doctrine.