Tag: privacy act

Connolly: Yes to Privacy Act Liability for Mental and Emotional Distress

A couple of years ago I wrote here about the Supreme Court case holding that a person cannot collect damages from the government under the Privacy Act based on mental and emotional distress. It’s a narrow point, but an important one, because mental and emotional distress is often the only harm a privacy invasion produces. If such injuries aren’t recognized, the Privacy Act doesn’t offer much of a remedy.

Many privacy advocates have sought to bloat privacy regulation by lowering the “harm” bar. They argue that the creation of a privacy risk is a harm or that worrisome information practices are harmful. But I think harm rises above doing things someone might find “worrisome.” Harm can occur, as I think it may have in this case, when one’s (hidden) HIV status and thus sexual orientation is revealed. It’s shown by proving emotional distress to a judge or jury.

Rep. Gerry Connolly (D-VA) has introduced the fix for the Supreme Court’s overly narrow interpretation of the Privacy Act. His Safeguarding Individual Privacy Against Government Invasion Act of 2014 would allow for non-pecuniary damages—that is, mental and emotional distress—in Privacy Act cases.

It’s a simple fix to a contained problem in federal privacy legislation. Its passage would not only close a gap in the statute; it would also help channel the privacy discussion in the right direction, toward real harms, which include provable mental and emotional distress.

Supreme Court: No Privacy Act Liability for Mental and Emotional Distress

Back in July of last year, I wrote about a case in the Supreme Court called FAA v. Cooper. In that Privacy Act case, a victim of a government privacy invasion had alleged “actual damages” based on evidence of mental and emotional distress.

Cooper, a recreational pilot who was HIV-positive, had chosen to conceal his health status generally, but revealed it to the Social Security Administration for the purpose of pursuing disability payments. When the SSA revealed that he was HIV-positive to the Department of Transportation, which was investigating pilots’ licenses in the hands of the medically unfit, the SSA violated the Privacy Act. Cooper claimed that he suffered mental and emotional distress upon learning of the disclosure of his health status and, inferentially, his sexual orientation, which he had kept private.

The question before the Court was whether the Privacy Act’s grant of compensation for “actual damages” included damages for mental and emotional distress. This week the Court held … distressingly … [sorry, I had to] … NO. Under the doctrine of sovereign immunity, the Privacy Act has to be explicit about providing compensation for mental and emotional distress. Justice Alito wrote for a Court divided 5-3 along traditional ideological lines (Justice Kagan not participating).

The decision itself is a nice example of two sides contesting how statutory language should be interpreted. My preference would have been for the Court to hold that the Privacy Act recognizes mental and emotional distress. After all, a privacy violation is the loss of confident control over information, which, depending on the sensitivity and circumstances, can be very concerning and even devastating.

The existence of harm is a big elephant in the privacy room. Many advocates seem to be trying to lower the bar in terms of what constitutes harm, arguing that the creation of a risk is a harm or that worrisome information practices are harmful. But I think harm rises above doing things someone might find “worrisome.” Harm may occur, as in this case, when one’s (hidden) HIV status and thus sexual orientation is revealed. Harm has occurred when one records and uploads to the Internet another’s sexual activity. But I don’t think it’s harmful if a web site or ad network gathers from your web surfing that you’ve got an interest in outdoor sports.

The upshot of Cooper is this: Congress can and should amend the Privacy Act so that the damages the government must pay when it harms someone include real and proven mental and emotional distress.

Obama Administration Fights Privacy Act Liability

In February 2004, privacy advocates were put off by a Supreme Court case called Doe v. Chao, in which the Court found that the Privacy Act requires a victim of a government privacy violation to show “actual damages” before receiving any compensation. The Act appeared to provide for $1,000 per violation in statutory damages, but the Court interpreted the legislation to require that actual damages be proven, after which the victim would be entitled to a minimum award of $1,000. (Statutory damages are appropriate in privacy cases against the government because government bureaucrats pay little price themselves when their agency gets fined. A penalty is required to draw oversight and political attention to violations of the law.)

Doe v. Chao was a close call given the statutory language, and the Court chose the outcome that would limit the government’s exposure to Privacy Act liability. Doing so marginally weakened the government’s attentiveness to the already insubstantial protections of the Privacy Act.

A companion case to Doe v. Chao has now reached the Supreme Court. FAA v. Cooper, which the highest court recently agreed to hear, involves a victim of a government privacy invasion who alleges “actual damages” based on evidence of mental and emotional distress. Cooper, a recreational pilot who was HIV-positive, had chosen to conceal his health status generally, but revealed it to the Social Security Administration for the purposes of pursuing disability payments. When the SSA revealed that he was HIV-positive to the Department of Transportation, it violated the Privacy Act. Cooper claims in court that he suffered mental and emotional distress at learning of the disclosure of his health status and inferentially his sexual orientation, which he had kept private.

In the Ninth Circuit Court of Appeals and now in the Supreme Court, the Obama Administration has argued that it doesn’t have to pay the victim of this privacy violation because mental and emotional distress do not qualify as “actual damages.” No one disputes that Cooper has to present objective proof of harm as a check on the truth of his claims. But the government isn’t saying that Cooper is faking distress at having his health status and sexual orientation illegally exposed by the government. The government is arguing that the court should limit “actual damages” to economic injury simply because it’s the government being sued.

The doctrine of sovereign immunity holds that the state is generally not subject to lawsuits. The state can make itself liable by a clear statement in legislation that it agrees to be sued. In the Privacy Act, Congress did exactly that: it created a cause of action against the government for Privacy Act violations.

But now the Obama Administration is arguing that the statute should be interpreted narrowly based on sovereign immunity. It’s an attempt to limit Privacy Act liability once again, insulating government officials from consequences of their wrongdoing. The Court should reject the sovereign immunity argument. Congress made the government subject to suit, and the chips should fall where they may on the question of what constitutes “actual damages.”

Putting aside sovereign immunity, what about the “actual damages” question? Should the Court recognize mental and emotional distress as a harm coming from privacy violations?

Privacy is the subjective condition people enjoy when they have the power to control information about themselves and when they have exercised that power consistent with their interests and values. People can, and often do, maintain privacy in information they share with a limited audience for limited purposes. Privacy is violated when that sense of control and controlled sharing is upended.

A privacy violation is called a “violation” because of the loss of confident control over information, which, depending on the sensitivity and circumstances, can be very concerning and even devastating. When privacy violations have this effect–not idle worry about who knows what, but the shock and mortification of having specific, sensitive information wrested from one’s control and exposed–that is when actual damages should be found. If the Privacy Act is to protect the interest after which it’s named, the Court should recognize proven mental and emotional suffering as “actual damages.”

Internet Privacy Law Needs an Upgrade

Imagine for a moment that all your computing devices had to run on code that had been written in 1986. Your smartphone is, alas, entirely out of luck, but your laptop or desktop computer might be able to get online using a dial-up modem. But you’d better be happy with a command-line interface to services like e-mail, Usenet, and Telnet, because the only “Web browsers” anyone’s heard of in 1986 are entomologists. Cloud computing? Location-based services? Social networking? No can do, though you can still get into a raging debate about the relative merits of Macs and PCs.

When it comes to federal privacy law, alas, we are running on code written in 1986: the Electronic Communications Privacy Act, a statute that’s not only ludicrously out of date, but so notoriously convoluted and unclear that even legal experts routinely lament the “mess” of electronic privacy law. Scholar Orin Kerr has called it “famously complex, if not entirely impenetrable.” Part of the problem, to be sure, lies with the courts. It is scandalous that in 2010, we don’t even have a definitive ruling on whether or when the Fourth Amendment requires the government to get a search warrant to read e-mails stored on a server. But the ECPA statute, meant to fill the gap left by the courts, reads like the rules of James T. Kirk’s fictional card game Fizzbin.

Suppose the police want to read your e-mail. To come into your home and look through your computer, of course, they’d need a full Fourth Amendment search warrant based on probable cause. If they want to intercept the e-mail in transit, they have to go still further and meet the “super-warrant” standards of the Wiretap Act. Once it lands on your Internet Service Provider’s server, a regular search warrant is once again the standard—assuming your ISP is providing access “to the public.” If it’s a more closed network like your work account, your employer is permitted to voluntarily hand it over. But if you read the e-mail, or leave it on the server for more than 180 days, then suddenly your ISP has become a “remote computing service” provider rather than an “electronic communications service provider” vis-à-vis that e-mail. So instead of a probable cause warrant, police can get a 2703(d) order based on “specific and articulable facts” showing the information is “relevant and material” to an investigation—a much lower standard—provided they notify you. Except they can ask a judge to delay notification if they think that would impede the investigation. Oh, unless your ISP is in the Ninth Circuit, where opened e-mails still get the higher level of protection until they’ve “expired in the normal course,” whatever that means.
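Because the rules are so path-dependent, it may help to restate that paragraph as a decision procedure. The following is just my own sketch, in code form, of the standards as described above (circa 2010), with deliberately simplified inputs; it is not statutory text:

```typescript
// Toy model of the standards for government access to e-mail contents
// described above. A sketch for illustration, not legal advice.
type ContentsStandard =
  | "Wiretap Act super-warrant"
  | "probable-cause search warrant"
  | "2703(d) order (specific and articulable facts)"
  | "provider may volunteer it";

function emailContentsStandard(msg: {
  inTransit: boolean;      // intercepted en route
  publicProvider: boolean; // ISP provides access "to the public"
  openedOrStale: boolean;  // read by you, or stored more than 180 days
  ninthCircuit: boolean;   // opened mail keeps protection until "expired"
}): ContentsStandard {
  if (msg.inTransit) return "Wiretap Act super-warrant";
  if (!msg.publicProvider) return "provider may volunteer it"; // e.g., a work account
  if (msg.openedOrStale && !msg.ninthCircuit) {
    // The ISP is now a "remote computing service" vis-à-vis this message.
    return "2703(d) order (specific and articulable facts)";
  }
  return "probable-cause search warrant";
}
```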

That’s for e-mail contents.  But maybe they don’t actually need to read your e-mail; maybe they just want some “metadata”—the equivalent of scanning the envelopes of physical letters—to see if your online activity is suspicious enough to warrant a closer look.  Well, then they can get what’s called a pen/trap order based on a mere certification to a judge of “relevance” to capture that information in realtime, but without having to provide any of those “specific and articulable facts.” Unless it’s information that would reveal your location—maybe because you’re e-mailing from your smartphone—in which case, well, the law doesn’t really say, but the Justice Department thinks a pen/trap order plus one of those 2703(d) orders will do, unless it’s really specific location information, at which point they get a warrant. If they want to get those records after the fact, it’s one of those 2703(d) orders—again, unless a non-public provider like your school or employer wants to volunteer them. Oh, unless it’s a counterterror investigation, and the FBI thinks your records might be “relevant” somehow, in which case they can get them with a National Security letter, without getting a judge involved at all.
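The metadata rules can be sketched the same way. Again, this is only a restatement of the paragraph above, with illustrative flags of my own choosing:

```typescript
// Toy model of the metadata rules described above; same caveats apply.
type MetadataStandard =
  | "pen/trap order (mere relevance)"
  | "pen/trap order plus 2703(d) order" // the Justice Department's view for location
  | "probable-cause warrant"
  | "2703(d) order"
  | "National Security Letter (no judge involved)";

function metadataStandard(req: {
  counterterror: boolean;   // FBI thinks the records might be "relevant"
  realtime: boolean;        // captured live, vs. records after the fact
  revealsLocation: boolean; // e.g., you're e-mailing from a smartphone
  preciseLocation: boolean; // "really specific" location information
}): MetadataStandard {
  if (req.counterterror) return "National Security Letter (no judge involved)";
  if (!req.realtime) return "2703(d) order"; // historical records
  if (req.preciseLocation) return "probable-cause warrant";
  if (req.revealsLocation) return "pen/trap order plus 2703(d) order";
  return "pen/trap order (mere relevance)";
}
```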

Dizzy yet? Well, a movement launched today with the aim of dragging our electronic privacy law, kicking and screaming, into the 21st century: The Digital Due Process Coalition.  They’re pushing for a streamlined law that provides clear and consistent protection for sensitive information—the kind of common sense rules you’d have thought would already be in place.  If the government wants to read the contents of your letters, they should need a search warrant—regardless of the phase of the moon when an e-mail is acquired. If they want to track your location, they should need a warrant. And all that “metadata” can be pretty revealing in the digital age—maybe some stricter oversight is in order before they start vacuuming up all our IP logs.

Reforms like these are way overdue. You wouldn’t trust your most sensitive data to software code that hadn’t seen a security patch in years. Why would you trust it to legal code that hasn’t had a major patch in over two decades?

Picture Don Draper Stamping on a Human Face, Forever

Last week, a coalition of 10 privacy and consumer groups sent letters to Congress advocating legislation to regulate behavioral tracking and advertising, a phrase that actually describes a broad range of practices used by online marketers to monitor and profile Web users for the purpose of delivering targeted ads. While several friends at the Tech Liberation Front have already weighed in on the proposal in broad terms – in a nutshell: they don’t like it – I think it’s worth taking a look at some of the specific concerns raised and remedies proposed. Some of the former strike me as being more serious than the TLF folks allow, but many of the latter seem conspicuously ill-tailored to their ends.

First, while it’s certainly true that there are privacy advocates who seem incapable of grasping that not all rational people place an equally high premium on anonymity, it strikes me as unduly dismissive to suggest, as Berin Szoka does, that it’s inherently elitist or condescending to question whether most users are making informed choices about their privacy. If you’re a reasonably tech-savvy reader, you probably know something about conventional browser cookies, how they can be used by advertisers to create a trail of your travels across the Internet, and how you can limit this.  But how much do you know about Flash cookies? Did you know about the old CSS hack I can use to infer the contents of your browser history even without tracking cookies? And that’s without getting really tricksy. If you knew all those things, congratulations, you’re an enormous geek too – but normal people don’t.  And indeed, polls suggest that people generally hold a variety of false beliefs about common online commercial privacy practices.  Proof, you might say, that people just don’t care that much about privacy or they’d be attending more scrupulously to Web privacy policies – except this turns out to impose a significant economic cost in itself.
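For readers wondering what that “old CSS hack” looked like, here is a minimal sketch of the well-known :visited history-sniffing technique (the URL list is illustrative, and modern browsers have long since patched this by lying to scripts about :visited styling):

```typescript
// Sketch of the old CSS :visited history-sniffing trick, now patched.
function sniffHistory(urls: string[]): string[] {
  // Style visited probe links with a distinctive color.
  const style = document.createElement("style");
  style.textContent = "a.probe:visited { color: rgb(255, 0, 0); }";
  document.head.appendChild(style);

  const visited: string[] = [];
  for (const url of urls) {
    const a = document.createElement("a");
    a.className = "probe";
    a.href = url;
    document.body.appendChild(a);
    // In vulnerable browsers, a visited link reported its real computed color.
    if (getComputedStyle(a).color === "rgb(255, 0, 0)") visited.push(url);
    a.remove();
  }
  style.remove();
  return visited;
}
```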

The truth is, if we were dealing with a frictionless Coasean market of fully informed users, regulation would not be necessary, but it would not be especially harmful either, because users who currently allow themselves to be tracked would all gladly opt in. In the real world, though, behavioral economics suggests that defaults matter quite a lot: Making informed privacy choices can be costly, and while an opt-out regime will probably yield tracking of some who would prefer not to be tracked under conditions of full information and frictionless choice, an opt-in regime will likely prevent tracking of folks who don’t actually object to it. And preventing that tracking also has real social costs, as Berin and Adam Thierer have taken pains to point out. In particular, it merits emphasis that behavioral advertising is regarded by many as providing a viable business model for online journalism, where contextual advertising tends not to work very well: There aren’t a lot of obvious products to tie in to an important investigative story about municipal corruption. Either way, though, the outcome is shaped by the default rule about the level of monitoring users are presumed to consent to. So which set of defaults ought we to prefer?

Here’s why I still come down mostly on Adam and Berin’s side, and against many of the regulatory remedies proposed. At the risk of stating the obvious, users start with de facto control of their data. Slightly less obvious: While users will tend to have heterogeneous privacy preferences – that’s why setting defaults either way is tricky – individual users will often have fairly homogeneous preferences across many different sites. Now, it seems to be an implicit premise of the argument for regulation that the friction involved in making lots of individual site-by-site choices about privacy will yield oversharing. But the same logic cuts in both directions: Transactional friction can block efficient departures from a high-privacy default as well. Even a default that optimally reflects the median user’s preferences or reasonable expectations is going to flub it for the outliers. If the variance in preferences is substantial, and if different defaults entail different levels of transactional friction, nailing the default is going to be less important than choosing the rule that keeps friction lowest. Given that most people do most of their Web surfing on a relatively small number of machines, this makes the browser a much more attractive locus of control. In terms of a practical effect on privacy, the coalition members would probably achieve more by persuading Mozilla to set Firefox to reject third-party cookies out of the box than by any legislation they’re likely to get – and indeed, it would probably have a more devastating effect on the behavioral ad market. Less bluntly, browsers could include a startup option that asks users whether they want to import an exclusion list maintained by their favorite force for good.

On the model proposed by the coalition, individuals have to make affirmative decisions about what data collection to permit for each Web site or ad network at least once every three months, and maybe each time they clear their cookies. If you think almost everyone would, if fully informed, opt out of such collection, this might make sense. But if you take the social benefits of behavioral targeting seriously, this scheme seems likely to block a lot of efficient sharing. Browser-based controls can still be a bit much for the novice user to grapple with, but programmers seem to be getting better and better at making it easier and more automatic for users to set privacy-protective defaults. If the problem with the unregulated market is supposed to be excessive transaction costs, it seems strange to lock in a model that keeps those costs high even as browser developers are finding ways to streamline that process. It’s also worth considering whether such rules wouldn’t have the perverse consequence of encouraging consolidation across behavioral trackers. The higher the bar is set for consent to monitoring, the more that consent effectively becomes a network good, which may encourage concentration of data in a small number of large trackers – not, presumably, the result privacy advocates are looking for. Finally – and for me this may be the dispositive point – it’s worth remembering that while American law is constrained by national borders, the Internet is not. And it seems to me that there’s a very real danger of giving the least savvy users a false sense of security – the government is on the job guarding my privacy! no need to bother learning about cookies! – when they may routinely and unwittingly be interacting with sites beyond the reach of domestic regulations.

There are similar practical difficulties with the proposal that users be granted a right of access to behavioral tracking data about them.  Here’s the dilemma: Any requirement that trackers make such data available to users is a potential security breach, which increases the chances of sensitive data falling into the wrong hands. I may trust a site or ad network to store this information for the purpose of serving me ads and providing me with free services, but I certainly don’t want anyone who sends them an e-mail with my IP address to have access to it. The obvious solution is for them to have procedures for verifying the identity of each tracked user – but this would appear to require that they store still more information about me in order to render tracking data personally identifiable and verifiable. A few ways of managing the difficulty spring to mind, but most defer rather than resolve the problem, and add further points of potential breach.

That doesn’t mean there’s no place for government or policy change here, but it’s not always the one the coalition endorses. Let’s look  more closely at some of their specific concerns and see which, if any, are well-suited to policy remedies. Only one really has anything to do with behavioral advertising, and it’s easily the weakest of the bunch. The groups worry that targeted ads – for payday loans, sub-prime mortgages, or snake-oil remedies – could be used to “take advantage of vulnerable consumers.” It’s not clear that this is really a special problem with behavioral ads, however: Similar targeting could surely be accomplished by means of contextual ads, which are delivered via relevant sites, pages, or search terms rather than depending on the personal characteristics or browsing history of the viewer – yet the groups explicitly aver that no new regulation is appropriate for contextual advertising. In any event, since whatever problem exists here is a problem with ads, the appropriate remedy is to focus on deceptive or fraudulent ads, not the particular means of delivery. We already, quite properly, have rules covering dishonest advertising practices.

The same sort of reply works for some of the other concerns, which are all linked in some more specific way to the collection, dissemination, and non-advertising use of information about people and their Web browsing habits. The groups worry, for instance, about “redlining” – the restriction or denial of access to goods, services, loans, or jobs on the basis of traits linked to race, gender, sexual orientation, or some other suspect classification. But as Steve Jobs might say, we’ve got an app for that: It’s already illegal to turn down a loan application on the grounds that the applicant is African American. There’s no special exemption for the case where the applicant’s race was inferred from a Doubleclick profile. But this actually appears to be something of a redlining herring, so to speak: When you get down into the weeds, the actual proposal is to bar any use of data collected for “any credit, employment, insurance, or governmental purpose or for redlining.” This seems excessively broad; it should suffice to say that a targeter “cannot use or disclose information about an individual in a manner that is inconsistent with its published notice.”

Particular methods of tracking may also be covered by current law, and I find it unfortunate that the coalition letter lumps together so many different practices under the catch-all heading of “behavioral tracking.” Most behavioral tracking is either done directly by sites users interact with – as when Amazon uses records of my past purchases to recommend new products I might like – or by third party companies whose ads place browser cookies on user computers. Recently, though, some Internet Service Providers have drawn fire for proposals to use Deep Packet Inspection to provide information about their users’ behavior to advertising partners – proposals thus far scuppered by a combination of user backlash and congressional grumbling. There is at least a colorable argument to be made that this practice would already run afoul of the Electronic Communications Privacy Act, which places strict limits on the circumstances under which telecom providers may intercept or share information about the contents of user communications without explicit permission. ECPA is already seriously overdue for an update, and some clarification on this point would be welcome. If users do wish to consent to such monitoring, that should be their right, but it should not be by means of a blanket authorization in eight-point type on page 27 of a terms-of-service agreement.

Similarly welcome would be some clarification on the status of such behavioral profiles when the government comes calling. It’s an unfortunate legacy of some technologically atavistic Supreme Court rulings that we enjoy very little Fourth Amendment protection against government seizure of private records held by third parties – the dubious rationale being that we lose our “reasonable expectation of privacy” in information we’ve already disclosed to others outside a circle of intimates. While ECPA seeks to restore some protection of that data by statute, we’ve made it increasingly easy in recent years for the government to seek “business records” by administrative subpoena rather than court order. It should not be possible to circumvent ECPA’s protections by acquiring, for instance, records of keyword-sensitive ads served on a user’s Web-based e-mail.

All that said, some of the proposals offered up seem, while perhaps not urgent, less problematic. Requiring some prominent link to a plain-English description of how information is collected and used constitutes a minimal burden on trackers – responsible sites already maintain prominent links to privacy policies anyway – and serves the goal of empowering users to make more informed decisions. I’m also warily sympathetic to the idea of giving privacy policies more enforcement teeth – the wariness stemming from a fear of incentivizing frivolous litigation. Still, the status quo is that sites and ad networks profitably elicit information from users on the basis of stated privacy practices, but often aren’t directly liable to consumers if they flout those promises, unless the consumer can show that the breach of trust resulted in some kind of monetary loss.

Finally, a quick note about one element of the coalition recommendations that neither they nor their opponents seem to have discussed much – the insistence that there be no federal preemption of state privacy law. I assume what’s going on here is that the privacy advocates expect some states to be more protective of privacy than Congress or the FTC would be, and want to encourage that, while libertarians are more concerned with keeping the federal government from getting involved at all. But really, if there’s an issue that was made for federal preemption, this is it. A country where vendors, advertisers, and consumers on a borderless Internet have to navigate 50 flavors of privacy rules to sell a banner ad or an iTunes track does not sound particularly conducive to privacy, commerce, or informed consumer choice.

E-Verify: The Surveillance Solution

The federal government will keep data about every person submitted to the “E-Verify” background check system for 10 years.

At least that’s my read of the slightly unclear notice describing the “United States Citizenship and Immigration Services 009 Compliance Tracking and Monitoring System” in today’s Federal Register. (A second notice exempts this data from many protections of the Privacy Act.)

To make sure that people aren’t abusing E-Verify, the United States Citizenship and Immigration Services Verification Division, Monitoring and Compliance Branch will watch how the system is used. It will look for misuse, such as when a single Social Security Number is submitted to the system many times, which suggests that it is being used fraudulently.

How do you look for this kind of misuse (and others, more clever)? You collect all the data that goes into the system and mine it for patterns consistent with misuse.
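To make the point concrete, here is a hypothetical sketch of the repeated-SSN check described above; the record layout and threshold are my own illustrative assumptions, not the agency’s actual code. Notice what the first parameter implies: the check only works if a record of every submission is retained.

```typescript
// Hypothetical sketch of one misuse pattern: a single Social Security
// Number submitted to E-Verify many times, suggesting fraudulent reuse.
// The record shape and threshold are illustrative assumptions.
interface Submission {
  ssn: string;
  employerId: string;
  submittedAt: Date;
}

function flagRepeatedSSNs(allSubmissions: Submission[], threshold = 5): string[] {
  const counts = new Map<string, number>();
  for (const s of allSubmissions) {
    counts.set(s.ssn, (counts.get(s.ssn) ?? 0) + 1);
  }
  // Flag SSNs whose submission count meets or exceeds the threshold.
  return [...counts.entries()]
    .filter(([, count]) => count >= threshold)
    .map(([ssn]) => ssn);
}
```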

The notice purports to limit the range of people whose data will be held in the system, listing “Individuals who are the subject of E-Verify or SAVE verifications and whose employer is subject to compliance activities.” But if the Monitoring and Compliance Branch is going to find what it’s looking for, it’s going to look at data about all individuals submitted to E-Verify. “Employer subject to compliance activities” is not a limitation because all employers will be subject to “compliance activities” simply for using the system.

In my paper on electronic employment eligibility verification systems like E-Verify, I wrote how such systems “would add to the data stores throughout the federal government that continually amass information about the lives, livelihoods, activities, and interests of everyone—especially law-abiding citizens.”

It’s in the DNA of E-Verify to facilitate surveillance of every American worker. Today’s Federal Register notice is confirmation of that.