Tag: privacy

GPS Tracking and a ‘Mosaic Theory’ of Government Searches

The Electronic Frontier Foundation trumpeted a surprising privacy win last week in the U.S. Court of Appeals for the D.C. Circuit. In U.S. v. Maynard (PDF), the court held that the use of a GPS tracking device to monitor the public movements of a vehicle—something the Supreme Court had held not to constitute a Fourth Amendment search in U.S. v. Knotts—could nevertheless become a search when conducted over an extended period. The Court in Knotts had considered only tracking that encompassed a single journey on a particular day, reasoning that the target of surveillance could have no “reasonable expectation of privacy” in the fact of a trip that any member of the public might easily observe. But the Knotts Court explicitly reserved judgment on potential uses of the technology with broader scope, recognizing that “dragnet” tracking that subjected large numbers of people to “continuous 24-hour surveillance” might raise different questions. Here, the D.C. court determined that continuous tracking for a period of over a month did violate a reasonable expectation of privacy—and therefore constituted a Fourth Amendment search requiring a judicial warrant—because such intensive, secretive tracking by means of public observation is so costly and risky that no reasonable person expects to be subject to such comprehensive surveillance.

Perhaps ironically, the court’s logic here rests on the so-called “mosaic theory” of privacy, which the government has relied on when resisting Freedom of Information Act requests. The theory holds that pieces of information that are not in themselves sensitive or potentially injurious to national security can nevertheless be withheld, because in combination (with each other or with other public facts) they permit the inference of facts that are sensitive or secret. The “mosaic,” in other words, may be far more than the sum of the individual tiles that constitute it. Leaving aside for the moment the validity of the government’s invocation of this idea in FOIA cases, there’s an obvious intuitive appeal to the idea, and indeed, we see that it fits our real-world expectations about privacy much better than the cruder theory that assumes the sum of “public” facts must always itself be a public fact.

Consider an illustrative hypothetical. Alice and Bob are having a romantic affair that, for whatever reason, they prefer to keep secret. One evening before a planned date, Bob stops by the corner pharmacy and—in full view of a shop full of strangers—buys some condoms. He then drives to a restaurant where, again in full view of the other patrons, they have dinner together. They later drive in separate cars back to Alice’s house, where the neighbors (if they care to take note) can observe from the presence of the car in the driveway that Alice has an evening guest for several hours. It being a weeknight, Bob then returns home, again by public roads. Now, the point of this little story is not, of course, that a judicial warrant should be required before an investigator can physically trail Bob or Alice for an evening. It’s simply that in ordinary life, we often reasonably suppose the privacy or secrecy of certain facts—that Bob and Alice are having an affair—that could in principle be inferred from the combination of other facts that are (severally) clearly public, because it would be highly unusual for all of them to be observed by the same public. Even more so when, as in Maynard, we’re talking not about the “public” events of a single evening, but comprehensive observation over a period of weeks or months. One may reasonably expect that “anyone” might witness any single event in such a series; it does not follow that one cannot also reasonably expect that no particular person or group will be privy to all of them. Sometimes, of course, even our reasonable expectations are frustrated without anyone’s rights being violated: A neighbor of Alice’s might by chance have been at the pharmacy and then at the restaurant.
But as the Supreme Court held in Kyllo v. U.S., even when some information might in principle be obtained by public observation, the use of technological means not in general public use to learn the same facts may nevertheless qualify as a Fourth Amendment search, especially when the effect of the technology is to render easy a degree of monitoring that would otherwise be so laborious and costly as to be normally infeasible.

Now, as Orin Kerr argues at the Volokh Conspiracy, significant as the particular result in this case might be, it’s the approach to Fourth Amendment privacy embedded here that’s the really big story. Orin, however, thinks it a hopelessly misguided one—and the objections he offers are all quite forceful.  Still, I think on net—especially as technology makes such aggregative monitoring more of a live concern—some kind of shift to a “mosaic” view of privacy is going to be necessary to preserve the practical guarantees of the Fourth Amendment, just as in the 20th century a shift from a wholly property-centric to a more expectations-based theory was needed to prevent remote sensing technologies from gutting its protections. But let’s look more closely at Orin’s objections.

First, there’s the question of novelty. Under the mosaic theory, he writes:

[W]hether government conduct is a search is measured not by whether a particular individual act is a search, but rather whether an entire course of conduct, viewed collectively, amounts to a search. That is, individual acts that on their own are not searches, when committed in some particular combinations, become searches. Thus in Maynard, the court does not look at individual recordings of data from the GPS device and ask whether they are searches. Instead, the court looks at the entirety of surveillance over a one-month period and views it as one single “thing.” Off the top of my head, I don’t think I have ever seen that approach adopted in any Fourth Amendment case.

I can’t think of one that explicitly adopts that argument.  But consider again the Kyllo case mentioned above.  Without a warrant, police used thermal imaging technology to detect the presence of marijuana-growing lamps within a private home from a vantage point on a public street. In a majority opinion penned by Justice Scalia, the court balked at this: The scan violated the sanctity and privacy of the home, though it involved no physical intrusion, by revealing the kind of information that might trigger Fourth Amendment scrutiny. But stop and think for a moment about how thermal imaging technology works, and try to pinpoint where exactly the Fourth Amendment “search” occurs.  The thermal radiation emanating from the home was, well… emanating from the home, and passing through or being absorbed by various nearby people and objects. It beggars belief to think that picking up the radiation could in itself be a search—you can’t help but do that!

When the radiation is actually measured, then? More promising, but then any use of an infrared thermometer within the vicinity of a home might seem to qualify, whether or not the purpose of the user was to gather information about the home, and indeed, whether or not the thermometer was precise enough to reveal any useful information about internal temperature variations within the home.  The real privacy violation here—the disclosure of private facts about the interior of the home—occurs only when a series of very many precise measurements of emitted radiation are processed into a thermographic image.  To be sure, it is counterintuitive to describe this as a “course of conduct” because the aggregation and analysis are done quite quickly within the processor of the thermal camera, which makes it natural to describe the search as a single act: Creating a thermal image.  But if we zoom in, we find that what the Court deemed an unconstitutional invasion of privacy was ultimately the upshot of a series of “public” facts about ambient radiation levels, combined and analyzed in a particular way.  The thermal image is, in a rather literal sense, a mosaic.
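The aggregation point can be made concrete with a toy sketch, in which every individual reading is an innocuous number but the assembled grid is revealing. All values here are invented for illustration; real thermography is of course far more involved:

```python
# Simulated point measurements of ambient radiation near a home,
# expressed as temperatures. Each reading, taken alone, discloses
# nothing; baseline readings are about 68 degrees.
readings = {(x, y): 68.0 for x in range(6) for y in range(6)}
for x in range(2, 4):
    for y in range(2, 4):
        readings[(x, y)] = 95.0  # a warmer patch behind one wall

# Only when the many measurements are arranged spatially and
# thresholded does a pattern (the "hot spot") emerge.
image = "\n".join(
    "".join("#" if readings[(x, y)] > 80 else "." for x in range(6))
    for y in range(6)
)
print(image)
```

The “search,” on this view, is not any single measurement but the assembly: the thermal image is literally a mosaic of individually unremarkable tiles.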

The same could be said about long-distance spy microphones: Vibrating air is public; conversations are private. Or again, consider location tracking, which is unambiguously a “search” when it extends to private places: It might be that what is directly measured is only the “public” fact about the strength of a particular radio signal at a set of receiver sites; the “private” facts about location could be described as a mere inference, based on triangulation analysis (say), from the observable public facts.
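The layered-facts point can be illustrated with a minimal trilateration sketch, assuming idealized, noise-free distance readings at three known receiver sites (all coordinates are invented; real systems work from signal strength or timing and must handle noise):

```python
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Recover a 2-D position from distances to three known receivers.

    Subtracting the three circle equations pairwise yields two linear
    equations in (x, y), which we solve directly. Each distance alone
    is an unremarkable measurement; only their combination yields the
    fact of where the target actually is.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receivers at known sites; the target is actually at (3, 4).
sites = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
dists = [math.dist(s, target) for s in sites]
print(trilaterate(*sites, *dists))  # ≈ (3.0, 4.0)
```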

There’s also a scope problem. When, precisely, do individual instances of permissible monitoring become a search requiring judicial approval? That’s certainly a thorny question, but it arises just as urgently in the other type of hypothetical case alluded to in Knotts, involving “dragnet” surveillance of large numbers of individuals over time. Here, too, there’s an obvious component of duration: Nobody imagines that taking a single photograph revealing the public locations of perhaps hundreds of people at a given instant constitutes a Fourth Amendment search. And just as there’s no precise number of grains of sand that constitutes a “heap,” there’s no obvious way to say exactly what number of people, observed for how long, are required to distinguish individualized tracking from “dragnet” surveillance. But if we anchor ourselves in the practical concerns motivating the adoption of the Fourth Amendment, it seems clear enough that an interpretation that detected no constitutional problem with continuous monitoring of every public movement of every citizen would mock its purpose. If we accept that much, a line has to be drawn somewhere. Come to think of it, Orin has himself proposed a procedural dichotomy between electronic searches that are “person-focused” and those that are “data-focused.” That approach has much to recommend it, but it is likely to present very similar boundary-drawing problems.

Orin also suggests that the court improperly relies upon a “probabilistic” model of the Fourth Amendment here (looking to what expectations about monitoring are empirically reasonable), whereas the Court has traditionally relied on a “private facts” model to deal with cases involving new technologies (looking to which types of information it is reasonable to consider private by their nature). Without recapitulating the very insightful paper linked above, I’ll say only that the boundaries between the models in Orin’s highly useful schema do not strike me as quite so bright. The ruling in Kyllo, after all, turned in part on the fact that infrared imaging devices are not in “general public use,” suggesting that the identification of “private facts” itself has an empirical and probabilistic component. The analyses aren’t really separate. What’s crucial to bear in mind is that there are always multiple layers of facts involved with even a relatively simple search: Facts about the strength of a particular radio signal, facts about a location in a public or private place at a particular instant, facts about Alice and Bob’s affair. In cases involving new technologies, the problem—though seldom stated explicitly—is often precisely which domain of facts to treat as the “target” of the search. The point of the expectations analysis in Maynard is precisely to establish that there is a domain of facts about macro-level behavioral patterns distinct from the unambiguously public facts about specific public movements at particular times, and that we have different attitudes about these domains.

Sorting all this out going forward is likely to be every bit as big a headache as Orin suggests. But if the Fourth Amendment has a point—if it enjoins us to preserve a particular balance between state power and individual autonomy—then as technology changes, its rules of application may need to get more complicated to track that purpose, as they did when the Court ruled that an admirably simple property rule was no longer an adequate criterion for identifying a “search.”  Otherwise we make Fourth Amendment law into a cargo cult, a set of rituals whose elegance of form is cold consolation for their abandonment of function.

Privacy-Protective Incentives and the Corporation

Many privacy advocates take corporate mendacity as a premise. From there, it’s easy to reach the conclusion that companies won’t protect privacy. For these privacy advocates, the fight for privacy is a fight against business.

In a sense, their conclusion about corporate behavior is true. Businesses won’t protect privacy beyond what they perceive consumers to want—doing so would just give away profits. Businesses will protect privacy when it’s a consumer demand they’ve promised to fulfill. Companies and their executives take considerable risks when they fail to meet that demand.

The exceptions are what get noticed, and Prudence Chan is an example for others to learn from. She was the head of Hong Kong cashless payment operator Octopus Holdings Ltd. until she resigned this week. Under her watch, the company sold data about users of the system for marketing purposes. Octopus will forfeit to charity the money it made on the sales. (It should be given to the affected users, but anyway…)

Hong Kong is debating whether its legal privacy protections are sufficient. But privacy officers and executives in companies around the world are looking at this story and considering how they would tolerate losing their jobs, status, and reputations. Their self-interest will drive them to protect privacy as demanded by their customers.

Sure, it might be nice for them to do it out of altruism or kindliness, but the result is the same.

Strip-Search Images Stored

The Transportation Security Administration will be sure to point out that it was not them—it was the U.S. Marshals Service—that kept “tens of thousands of images recorded with a millimeter wave system at the security checkpoint of a single Florida courthouse,” according to Declan McCullagh of C|Net news.

The TSA has taken pains to make sure that their use of strip-search machines does not produce compromising images of the traveling public, but rules are made to be broken. How do you protect privacy in the use of a technology that is fundamentally designed to invade privacy?

What They Know Is Interesting—-But What Are You Going to Do About It?

The Wall Street Journal has stirred up a discussion of online privacy with its “What They Know” series of reports. These reports reveal again the existence and some workings of the information economy behind the Internet and World Wide Web. (All that content didn’t put itself there, y’know!)

The discussion centers around “tracking” of web users, particularly through the use of “cookies.” Cookies are little text files that web sites offer your browser when you visit. If your browser accepts the cookie, it will share the content of the text file back with that domain when you visit it a second time.

Often cookies have distinct strings of characters in them, so the site can recognize you. Sites use cookies to customize your experience. If you voted on a poll, for example, a cookie will cause the site to tell you how you voted. Cookies enable the “shopping cart” function in online stores.
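That round trip can be sketched with Python’s standard http.cookies module; the cookie name and ID value here are invented for illustration:

```python
from http.cookies import SimpleCookie

# Server side, first visit: offer the browser a cookie carrying a
# distinct identifier so the site can recognize this visitor later.
issued = SimpleCookie()
issued["visitor_id"] = "a1b2c3"   # hypothetical opaque ID
issued["visitor_id"]["path"] = "/"
print(issued.output())            # the Set-Cookie response header

# Browser side: on the next visit to the same domain, the browser
# sends the stored text back in its Cookie request header.
request_header = "visitor_id=a1b2c3"

# Server side, second visit: parse the header and recognize the browser.
returned = SimpleCookie(request_header)
print(returned["visitor_id"].value)  # the same identifier comes back
```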

Advertising networks use cookies to gather information about web surfers. Ads are embedded on the main sites people visit, just like the video above and the Amazon Kindle widget in the column on the right. They’re served by different servers than most of the content on the page. Embedded content acts as a sort of “third party” to the main transaction between web surfers and the sites they visit. Embedded content can offer cookies just like main sites do—they’re known as “third-party cookies.”

A network that has ads on a lot of sites will recognize a browser (and by inference the person using it) when it goes to different web sites, enabling the ad network to get a sense of that person’s interests. Been on a site dealing with SUVs? You just might see an SUV ad as you continue to surf.
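A toy model of that recognition-and-inference step, using hypothetical cookie IDs and site topics, might look like this:

```python
from collections import defaultdict

# The same third-party cookie ID shows up on every page that embeds
# the network's ads, so visits can be linked into an interest profile
# without the network knowing who the person is.
profiles = defaultdict(set)  # cookie ID -> inferred interests

def log_ad_request(cookie_id, site_topic):
    """Record that this browser was seen on a site about site_topic."""
    profiles[cookie_id].add(site_topic)

def pick_ad(cookie_id):
    """Serve an ad matching any observed interest, else a generic one."""
    interests = profiles.get(cookie_id, set())
    return f"ad:{sorted(interests)[0]}" if interests else "ad:generic"

log_ad_request("a1b2c3", "SUVs")    # browser visits a site about SUVs
log_ad_request("a1b2c3", "travel")  # later, a travel site
print(pick_ad("a1b2c3"))            # an interest-targeted ad
print(pick_ad("zzz999"))            # unrecognized browser: generic ad
```

Note that nothing here requires a name, address, or any other contact information, which bears on the point about business models below.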

This is important to note: Most web sites and ad networks do not “sell” information about their users. In targeted online advertising, the business model is to sell space to advertisers—giving them access to people (“eyeballs”) based on their demographics and interests. It is not to sell individuals’ personal and contact info. Doing the latter would undercut the advertising business model and the profitability of the web sites carrying the advertising.

Some people don’t like this tracking. I think some feel it undignified to be a mere object of impersonal commerce (see Seger, Bob). Some worry that data about their interests will be used to discriminate wrongly against them, or to exclude them from information and opportunities they should enjoy. Excess customization of the web experience may stratify society, some believe. Tied to real identities, this data could fall into the hands of government and be used wrongly. These are all legitimate concerns, and I share some of them more, and some less, than others.

One I understand but dislike is the offense some people take at cookies for their “surreptitious” use. How many decades must cookies be integral to web browsing, and how many waves of public debate must there be about cookies, before they lose their surreptitious cast? Cookies are just as surreptitious as photons and sound waves, which silently and invisibly carry data about you to anyone in the vicinity. We’d all be in a pretty tough spot without them.

Though cookies—and debate about their privacy consequences—have been around for a long time, many people don’t know even the basics I laid out above. They also don’t know that cookies are within the control of every web user.

As I testified to the Senate Commerce Committee last week, in the major browsers (Firefox and Internet Explorer), one must simply go to the “Tools” pull-down menu, select “Options,” then click on the “Privacy” tab to customize one’s cookie settings. In Firefox, one can decline to accept all third-party cookies, neutering the cookie-based data collection done by ad networks. In Internet Explorer, one can block all cookies, block all third-party cookies, or even choose to be prompted each time a cookie is offered.

Yes, new technologies make cookie control an imperfect protection against tracking, but that does not excuse consumers from the responsibility to exercise privacy self-help that will get at the bulk of the problem.

Some legislators, privacy advocates, and technologists want very badly to protect consumers, but much of what is called “consumer protection” actually functions as an invitation for consumers to cede personal responsibility. People rise or fall to meet expectations, and consumer advocates who assume incompetence on the part of the public may have a hand in producing it, making consumers worse off.

If a central authority such as Congress or the Federal Trade Commission were to decide for consumers how to deal with cookies, it would generalize wrongly about many, if not most, individuals’ interests, giving them the wrong mix of privacy and interactivity, for example. And it would leave consumers unprotected from threats beyond its jurisdiction (e.g., web tracking by sites outside the United States). Education is the hard way, and it is the only way, to get consumers’ privacy interests balanced with their other interests.

But perhaps this is a government vs. corporate passion play, with government as the privacy defender (… oh, nevermind). One article in the WSJ series has interacted with lasting anti-Microsoft sentiment to produce interpretations that business interests are working to undercut consumer privacy. Engineers working on a new version of Microsoft’s Internet Explorer browser thought they might set certain defaults to protect privacy better, but they were overruled when the business segments at Microsoft learned of the plan. Privacy “sabotage,” the Electronic Frontier Foundation called it. And a Wired news story says Microsoft “crippled” online privacy protections.

But if the engineers’ plan had won the day, an equal and opposite reaction would have resulted when Microsoft “sabotaged” web interactivity and the advertising business model, “crippling” consumer access to free content. The new version of Microsoft’s browser maintained the status quo in cookie functionality, as do Google’s Chrome browser and Firefox, a product of non-profit privacy “saboteur” the Mozilla Foundation. The “business attacks privacy” story doesn’t wash.

This is not to say that businesses don’t want personal information—they do, so they can provide maximal service to their customers. But they are struggling to figure out how to serve all dimensions of consumer interest including the internally inconsistent consumer demand for privacy along with free content, custom web experiences, convenience, and so on.

Only one thing is certain here: Nobody knows how this is supposed to come out. Cookies and other tracking technologies will create legitimate concerns that weigh against the benefits they provide. Browser defaults may converge on something more privacy protective. (Apple’s Safari browser rejects third-party cookies unless users tell it to do otherwise.) Browser plug-ins will augment consumers’ power to control cookies and other tracking technologies. Consumers will get better accustomed to the information economy, and they will choose more articulately how they fit into it. 

What matters is that the conversation should continue. If you’ve read this far, you’re better equipped to participate in it, and to take responsibility for your own privacy.

Do so.

Compare and Contrast

Fourth Amendment:

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

Supreme Court (Katz v. U.S.):

“[S]earches conducted outside the judicial process, without prior approval by judge or magistrate, are per se unreasonable under the Fourth Amendment—subject only to a few specifically established and well delineated exceptions.”

Washington Post:

“The Obama administration is seeking to make it easier for the FBI to compel companies to turn over records of an individual’s Internet activity without a court order if agents deem the information relevant to a terrorism or intelligence investigation.”

The Information Economy Stops Evolving Today

That would be the message if a bill introduced in Congress this week were to pass. H.R. 5777 is the “Building Effective Strategies To Promote Responsibility Accountability Choice Transparency Innovation Consumer Expectations and Safeguards Act” or the “BEST PRACTICES Act.” If acronyms were a basis for judging legislation, it would be widely hailed as a masterwork.

But its substance is concerning, to say the least. The bill’s scope is massive: Just about every person or business that systematically collects information would be subject to a new federal regulatory regime governing information practices. By systematic, I mean: If you get a lot of emails or run a website that collects IP addresses (and they all do), you’re governed by the bill.

There’s one exception to that: The bill specifically exempts the government. What chutzpah our government has to point the finger at us while its sprawling administrative data collection and surveillance infrastructure spirals out of control.

Reviewing the bill, I found it interesting to consider what you get when you take a variety of today’s information “best practices” and put them into law. Basically, you freeze in place how things work today. You radically simplify and channel all kinds of information practices that would otherwise multiply and variegate.

I spoke about this yesterday with CNet News’ Declan McCullagh:

Harper says it reminds him of James C. Scott’s book, “Seeing Like A State.” Governments and big corporations “radically simplify what they oversee to make it governable,” he said. “In things like forestry and agriculture, this has had devastating environmental effects because ecosystems don’t function when you eliminate the thousands of ‘illegible’ relationships and interactions. This is Seeing Like a State for the information economy.”

Give people remedies when they’re harmed by information practices, and then leave well enough alone. There’s no place for a list of “must-do’s” and “can’t-do’s” that choke our nascent information economy—especially not coming from a government that doesn’t practice what it preaches.

Stop ‘n’ Frisk Databases

Via Adam Serwer, New York governor David A. Paterson is expected to sign a bill today doing away with data collection on people the police stop and question, but who have done nothing wrong.

The Transportation Security Administration’s “SPOT” program—recently the subject of a scathing Government Accountability Office critique—does similar data collection about innocent people.

From late May 2004 through August 2008, “behavior detection officers” referred 152,000 travelers to secondary inspection at airports. Of those, TSA agents referred 14,000 people to law enforcement, which resulted in approximately 1,100 arrests. None had links to terrorism or any threat to aviation.

The data TSA collects “when observed behaviors exceed certain thresholds”—that is, when a traveler garners TSA suspicion—includes:

  • first, middle, and last names
  • aliases and nicknames
  • home and business addresses and phone numbers
  • employer information
  • identification numbers such as Social Security Number, drivers license number or passport number
  • date and place of birth
  • languages spoken
  • nationality
  • age
  • sex
  • race
  • height and weight
  • eye color
  • hair color, style and length
  • facial hair, scars, tattoos and piercings, clothing (including colors and patterns) and eyewear
  • purpose for travel and contact information
  • photographs of any prohibited items, associated carry-on bags, and boarding documents
  • identifying information for traveling companions.