
Three Keys to Surveillance Success: Location, Location, Location

The invaluable Chris Soghoian has posted some illuminating—and sobering—information on the scope of surveillance being carried out with the assistance of telecommunications providers.  The entire panel discussion from this year’s ISS World surveillance conference is well worth listening to in full, but surely the most striking item is a direct quotation from Sprint’s head of electronic surveillance:

[M]y major concern is the volume of requests. We have a lot of things that are automated but that’s just scratching the surface. One of the things, like with our GPS tool. We turned it on the web interface for law enforcement about one year ago last month, and we just passed 8 million requests. So there is no way on earth my team could have handled 8 million requests from law enforcement, just for GPS alone. So the tool has just really caught on fire with law enforcement. They also love that it is extremely inexpensive to operate and easy, so, just the sheer volume of requests they anticipate us automating other features, and I just don’t know how we’ll handle the millions and millions of requests that are going to come in.

To be clear, that doesn’t mean they are giving law enforcement geolocation data on 8 million people. He’s talking about the wonderful automated backend Sprint runs for law enforcement, L-Site, which allows investigators to rapidly retrieve information directly, without the burden of having to get a human being to respond to every specific request for data. Rather, says Sprint, each of those 8 million requests represents a time when an FBI computer or agent pulled up a target’s location data using their portal or API. (I don’t think you can Tweet subpoenas yet.) For an investigation whose targets are under ongoing real-time surveillance over a period of weeks or months, that could very well add up to hundreds or thousands of requests for a few individuals. So those 8 million data requests, according to a Sprint representative in the comments, actually “only” represent “several thousand” discrete cases.
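To get a feel for how “several thousand” cases could generate 8 million automated hits, here is a back-of-the-envelope sketch; the polling interval, surveillance length, and case count are assumptions chosen for illustration, not figures from Sprint or the Justice Department.

```ts
// Back-of-the-envelope arithmetic (all inputs assumed, purely illustrative).
const pingsPerHour = 4;            // assume a location check every 15 minutes
const hoursPerDay = 24;
const surveillanceDays = 60;       // assume two months of real-time tracking
const requestsPerTarget = pingsPerHour * hoursPerDay * surveillanceDays; // 5,760

const totalRequests = 8_000_000;   // the figure Sprint's manager cited
const discreteCases = 5_000;       // "several thousand" cases, assumed value
const averagePerCase = totalRequests / discreteCases; // 1,600 requests per case

console.log({ requestsPerTarget, averagePerCase });
```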

As Kevin Bankston argues, that’s not entirely comforting. The Justice Department, Soghoian points out, is badly delinquent in reporting on its use of pen/trap orders, which are generally used to track communications routing information like phone numbers and IP addresses, but are likely to be increasingly used for location tracking. And recent changes in the law may have made it easier for intelligence agencies to turn cell phones into tracking devices. In the criminal context, the legal process for getting geolocation information depends on a variety of things—different districts have come up with different standards, and it matters whether investigators want historical records about a subject or ongoing access to location info in real time. Some courts have ruled that a full-blown warrant is required in some circumstances; in others, a “hybrid” order consisting of a pen/trap order and a 2703(d) order will suffice. But a passage from an Inspector General’s report suggests that the 2005 PATRIOT reauthorization may have made it easier to obtain location data:

After passage of the Reauthorization Act on March 9, 2006, combination orders became unnecessary for subscriber information and [REDACTED PHRASE]. Section 128 of the Reauthorization Act amended the FISA statute to authorize subscriber information to be provided in response to a pen register/trap and trace order. Therefore, combination orders for subscriber information were no longer necessary. In addition, OIPR determined that substantive amendments to the statute undermined the legal basis for which OIPR had received authorization [REDACTED PHRASE] from the FISA Court. Therefore, OIPR decided not to request [REDACTED PHRASE] pursuant to Section 215 until it re-briefed the issue for the FISA Court. As a result, in 2006 combination orders were submitted to the FISA Court only from January 1, 2006, through March 8, 2006.

The new statutory language permits FISA pen/traps to get more information than is allowed under a traditional criminal pen/trap, with a lower standard of review, including “any temporarily assigned network address or associated routing or transmission information.” Bear in mind that it would have made sense to rely on a 215 order only if the information sought was more extensive than what could be obtained using a National Security Letter, which requires no judicial approval. That makes it quite likely that it’s become legally easier to transform a cell phone into a tracking device even as providers are making it point-and-click simple to log into their servers and submit automated location queries. So it’s become much more urgent that the Justice Department start living up to its obligation to tell us how often it’s using these souped-up pen/traps, and how many people are affected. In congressional debates, pen/trap orders are invariably mischaracterized as minimally intrusive, providing little more than the list of times and phone numbers they produced 30 years ago. If they’re turning into a plug-and-play solution for lojacking the population, Americans ought to know about it.

If you’re interested enough in this stuff to have made it through that discussion, incidentally, come check out our debate at Cato this afternoon, either in the flesh or via webcast. There will be a simultaneous “tweetchat” hosted by the folks at Get FISA Right.

Online Privacy and the Commerce Clause

I fear that with the PATRIOT Act on the brain, I’ve been remiss in continuing the colloquy on behavioral ads and privacy regulation that I’d been having with Jim Harper—who flattered me by responding in a long and thoughtful essay a couple weeks back. Because there’s so much interesting stuff there, I hope he won’t mind if I restrict myself to the first part of his reply here, in the interest of making this all a bit more digestible to those whose fascination with the topic may not be quite as consuming as ours. I’ll consider briefly the constitutional issue Jim raises, and turn to some of the specifics of the issue—and the relative merits of the common law alternative—in another post.

So like every good dorm room bull session, we begin in the weeds of  policy and quickly find ourselves breathing the rarefied air of constitutional theory. Supposing for the moment that we thought it were a good idea on policy grounds, would it be within the power of Congress to set ground rules for online advertisers who gather personal data from Web browsers? Recall that there are two particular rules that I’ve said I’d be tentatively open to, but which Jim rejects: a requirement of notice when information is being collected (say via a small link from the adspace to a privacy policy) and a rule establishing that privacy policies are enforceable, so that individual users can sue for damages if a company knowingly  violates its stated policy (thus far, courts have not generally found these to be binding). Does this fall within the power to “regulate commerce … among the several states”? I think so. I’ll start with what I hope will be some uncontroversial arguments and go from there.

So first, let’s grant that there’s one type of “original intent” that everyone ought to care about, whatever their more general interpretive stance: what Ronald Dworkin calls the linguistic intent of the Framers. That is, if words like “commerce” and “regulate” had narrower meanings in 1787 than they do today, we must, of course, read them now in that light: “Commerce” means actual interstate traffic in goods and services, rather than economic activity more generally, and “regulation” is centrally about establishing uniform rules and procedures.  With these appropriately narrowed readings in mind, I think it’s still a slam-dunk that online ads are covered.

There are, in fact, at least three different senses in which behavioral ads might be classed as interstate commerce. First, the purchase of the ad space itself is obviously a commercial transaction—frequently though not necessarily between entities in different states—and there’s a reasonable question of whether a host site with a posted privacy policy is implicitly committed to applying that policy as a condition on ad space sold to third parties. Second, the ads themselves will typically propose a commercial transaction and, more directly than other ads, can plausibly be seen as the first step in the transaction itself, as clicking on the ad will often bring you directly to a page where you can complete the purchase it recommends. Finally, the personal and behavioral user data collected is itself a valuable commodity, and many sites function with a pretty explicit informational quid pro quo: You will receive access to our content in exchange for registering and providing us with certain data. Since the Internet is borderless, most sites will be getting most of their traffic from people located in different states or countries, and even narrowly state-focused sites are likely to have substantial border-crossing traffic. So on a pretty straight reading of the constitutional language, I find very little reason to doubt that Congress may set uniform default rules for these interstate transactions, rather than leaving it to a patchwork of state rules.

Now, Jim’s reason for questioning this seems to be that the primary concern of the Framers was to prevent states from creating trade barriers. That may be, but if we skip ahead to Article I, Section 10, we find that the Framers knew perfectly well how to enact general and purely prohibitory bans on such shenanigans using more apt “no state shall” language. Instead, they used precisely the same language for interstate commerce as they did for international commerce, where history suggests that the Framers (many of them steeped in the mercantilist economic theories of the day) had been above all concerned to preserve the ability to erect protectionist trade barriers. So we’re left with a choice between ascribing to the Framers a frankly stunning level of linguistic incompetence or supposing that the Constitution actually does grant the affirmative power that a facial reading suggests.

Needless to say, this does not require us to adopt the post–New Deal reading that places anything with the least potential influence on economic activity under Congressional purview. But we’re pretty close to the core here. Indeed, one of the early cases I know Jim considers a lodestone for the “no trade barriers” reading, Gibbons v. Ogden, involves a congressional grant of a license to operate steamboats. The court found that this superseded the monopoly New York had sought to grant another steamboat operator, which fits Jim’s point to an extent, but it’s crystal clear from that (1824) ruling that the power of Congress here is a broad authority to grant or withhold a privilege to operate interstate vessels, and establish conditions on such vessels, including restrictions on ownership and personnel. It seems to me you’d have to get awfully creative to read the clause in a way that authorizes that kind of authority over an “instrumentality” of commerce (water navigation) but forbids Congress from specifying the kind of notice a merchant must provide when initiating an actual interstate commercial transaction.

A slightly more controversial suggestion: When the specific substantive intent of the Framers is not explicitly embedded in the Constitution’s language—by which I mean, the specific use they thought a wise Congress would make of enumerated powers in light of contemporary economic theories, whether liberal or mercantilist—I am not inclined to give it very great weight. Or more bluntly, when the legal language is abstract, I don’t think we’re bound by an original conception of how or where it applied in specific cases—to the extent such a consideration is even intelligible when we’re talking about Internet advertising. Manifestly, very few people at the time of the passage of the Fourteenth Amendment believed that the abstract guarantee of “equal protection” entailed a substantive right of black children to attend public schools the states restricted to whites. But insofar as what they wrote into law was the abstract guarantee, I don’t think we’re required to care what they believed. Our modern reading should be constrained by the original sense of the words used, and to some extent by the original structural purpose served (translated as necessary). But on the question of specific application—whether privacy rules for online ads are encompassed within “regulation” of “commerce”—even if you pulled out the Ouija board and got a personal verdict from James Madison, it would just be one more opinion.

Finally, and maybe most controversially: What kind of recommendations should we make in a world where our preferred interpretation of the Constitution lost the fight a long time ago? If the question is what we should recommend to judges, presumably we want to recommend that they start shifting back in the direction of a reading we regard as better justified. But what about when, as Jim imagines, we’re advising legislators? Should we only recommend what we believe to be authorized by what we hold to be the best reading of the Constitution, or will it sometimes make sense to endorse legislation that is plainly allowed by the current regnant interpretation, but that might be outside the scope of the interpretation we regard as superior? I think it will, partly for theoretical, and partly for pragmatic reasons.

At a practical level, both legislators and citizens widely believe Congress to have broader policy discretion than most of the authors here do. So very generally speaking, I don’t think it serves limited government to refrain from weighing in on the relative merits of policy options that wouldn’t be on the table at all if our arguments had fared better at the meta-level. (Recall the old joke about the principled pacifist answer to how to respond to World War II: Don’t sign the Treaty of Versailles!) Now, on this particular question it’s not a sure thing that Congress or the FTC will act, and maybe “hands off” is the best advice to give. But there are plenty of areas where there’s no realistic chance that Congress is going to abstain altogether, even if we think that’s what the best interpretation of the Constitution requires. In those cases, I think it’s at least sometimes appropriate to flag the meta objection and then say something about the policy merits. Obviously there are limits—I don’t expect I’ll ever express a view on the “best” way to run a torture chamber—but there are plenty of issues where it seems perverse for the people most concerned with limited government to sit out the day-to-day debates and focus on getting Wickard v. Filburn overturned, glad as I am that there are folks hammering that.

That dovetails with the theoretical reason, which has to do with the broader question of why constitutional principles are binding on us at all. I assume it is not because the Founders, brilliant though they were, enjoyed some divine right of command that the inheritors of their institutions are compelled to obey. Partly it’s that the principles embedded in the Constitution are good ones, but a substantial piece of the answer, I think, is that they provide a stable framework within which we conduct our political and private lives. Judges give weight to stare decisis even when they think the case at the fountainhead of a line of precedent was poorly decided, in part because the legitimacy and authority of law are to a great extent a function of its predictability, of the way it allows us to take actions and make agreements and know pretty much what the legal consequences will be, however much else may remain unpredictable. Constitutional restraints do this one level up, establishing (albeit roughly) a domain of legal variation over the longer term. This is  not, for what it’s worth, wacky postmodern Critical Legal Studies stuff; it’s an extrapolation from Hayek. To imagine that you can remake a society’s institutions wholesale—even if your guide is the best interpretation of a founding document, and even if you’re pretty sure that interpretation held sway a couple centuries ago—is the fallacy of constructivist rationalists.

Now, I think the right account of why we should regard the Constitution as binding starts with considerations along these lines, but this has the (perhaps unfortunate) consequence that even if you had a super-awesome unanswerable argument for why the Constitution mandates libertopia, at least when read properly absent the accretions of precedent, you still wouldn’t have an argument that judges, legislators, and government officials must all start acting on this understanding as of tomorrow. What you’d have is a good starting point for a much more gradual process of paring government back down. Not, to be clear, because I think the Constitution “means whatever the Supreme Court says it does”—that would be incoherent, since the court’s practice is unintelligible, and its legitimacy illusory, unless we assume there’s an independent meaning for them to strive toward.  But an “independent” meaning can be located in a community of interpretation and practice that extends beyond the framing generation. By analogy: If I want to use language “correctly” to communicate, I don’t get to just assign whatever meanings I like to words. It’s even possible to make a strong argument that the majority of speakers at a particular historical moment are using a word—like “decimate” or “hopefully” or “brutalize”—improperly. But neither does it mean that the first person to coin the term gets to specify its legitimate uses forever. And, in fact, anyone who insisted on using “decimate” to mean only “reduce by ten percent” would probably find his attempts at communication misfiring badly. To say that meaning is necessarily public and independent—consult Hayek’s cousin Wittgenstein here—does not require a baptismal view of meaning. Or at any rate, whether it does or not depends on the function your interpretive practice serves.

So yeah, that’s all pretty far removed from our original discussion—and I’m hoping far enough below the fold that it doesn’t put me on the wrong end of another dozen arguments with colleagues. I’ll do another post later this week where I actually get to the policy question, and some potent objections that both Jim and Tim Lee have raised.

On Notice

I’m delighted that Julian Sanchez has joined us at Cato. He’s as smart as they come. I’m equally pleased that I’ll have an intellectual sparring partner here on some of my issues from time to time. I encouraged Julian to share here some of what we had been discussing about privacy notices via email.

There are lots of dimensions to our conversation, but I’ll summarize it as follows: Can federal statutes protect Web surfers’ privacy? (We’re talking about privacy from other private actors, not privacy from government. Government self-control expressed in federal statutes could obviously improve privacy from government.)

Julian can see a couple of statutes helping: a requirement that third-party trackers provide a link explaining what they do, and a requirement that privacy policies be enforceable.

I think the former is a fine thing if people want it. I’m dubious about its benefits, though, and wouldn’t mandate it. The latter is the outcome I prefer—strongly!—but a federal statute is the wrong way to get there.

As you read Julian’s comment and mine, I think the divide you’ll see is a common one among libertarians. Some of us love efficiency and wealth creation, which is such a delightful product of free markets. And some of us love freedom for its own sake, not just for free markets, efficiency, and wealth creation. We’ll give up a little efficiency and wealth (in the short term) to protect liberty.

I’ll discuss the topic in the order I would as a legislative staffer (which I was), treating first the subject Julian left to last: whether the federal government has a constitutional role.

Is It Constitutional?

As we all know, the U.S. constitution gave the federal government limited powers, reserving the rest to the states and people. This was for a number of reasons, including contemporary experience with the imperiousness of a remote government.

Technology and communications might eventually change things, but so far nothing has overcome the proclivity of remote powers to misunderstand their subjects and act badly toward them, ignorant of their needs. (I’ll discuss how little the federal government—or anyone—knows about consumers’ privacy interests below.)

The constitution did give the federal government power to “regulate commerce with foreign nations, and among the several states, and with the Indian tribes.” Under the articles of confederation, the states had fallen into trade protectionism, and the purpose of this power was to suppress this form of parochialism.

It’s a straightforward inference from the grant of like authority over international, interstate, and tribal commerce that this was not a general grant to regulate all things we today call “commercial.” It was authority to make regular the buying and selling of things across jurisdictional lines. The Supreme Court allowed the limits on the commerce power to be breached in the New Deal era.

Has the constitutional design of our government been rendered quaint by the emergence of national markets for goods and services? By that international marketplace for goods, services, and ideas that we call the Internet?

No. Because the constitution and the commerce clause were not a commercial charter. They were the design of what we would today call a “political economy.” The framers designed in competition for power among branches of the federal government and between the states and federal government. Government powers contesting against each other would leave the people more free. I won’t recite how federalism works in every detail, but I encourage people to familiarize themselves with its genius.

National markets and the Internet do weaken federalism in some respects. They make it harder for businesses to exit states that make themselves unfriendly through high taxes, poor services, and inefficient regulation. Thus it is harder to hold state officials accountable. But this is no argument for removing their power to a more remote level of government, from which consumers and businesses have no power of exit save leaving the country! Establishing federal commercial rules would cut tendons in the political economy that the constitution created.

And with the whole country under the same rule, there would be almost no way to learn whether a better rule is preferable. A national rule established in ignorance of what the future holds (and they all are) stands a decent chance of being inefficient, unjust, or ill-adapted to new developments in technology, consumer demand, or business models. But there’s no corrective mechanism. Short-term efficiency gained by stabilizing expectations comes at the cost of long-term sclerosis.

There are ways consistent with the constitution to harmonize state laws while leaving states free to innovate in response to change, I hasten to point out.

The “national markets” argument for federal preemption is supported by many efficiency-oriented libertarians. But as markets globalize, the argument will support global regulation equally well. This is something that many of those same libertarians oppose. Perhaps they believe that American politicians can be trusted but not foreign ones—I don’t know, and I don’t see much difference between them. There are many good reasons for preferring local or personal regulation to national or global.

Does Notice Work?

But let’s assume the federal government is going to act in this area, and that we have been assigned to write a statute that promotes the privacy of Web-surfers. Does requiring third-party trackers to provide notice do that? I don’t think so.

First, let’s be more precise about the problem we’re trying to fix. Julian says that there exists a set of consumer expectations that are not being met. “Empirically,” he says, most people don’t expect to be monitored all the time unless they’ve been explicitly warned otherwise. I take Julian’s point to be that this lack of notice is depriving them of information they need to exercise privacy-protective self-help. The result is less privacy than consumers would have with notice and lower consumer welfare.

I haven’t seen the research on which Julian bases his statement about consumer expectations, and I don’t know of any public opinion research that has overcome the deficits Solveig Singleton and I identified in our 2001 paper on privacy polling.

If people have these expectations, they’re counterfactual. I’m willing to be corrected if it’s no longer true, but I believe that most servers record and store the IP addresses from which they have received requests for data, monitoring and archiving records of all visitors in at least an elemental sense.
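For readers who want to see how elemental that logging is, here is a minimal sketch of a web server that sees, and could log, the requesting IP address on every hit; the port and the sample log line in the comment are illustrative, but the practice it illustrates is simply the default behavior of standard server software.

```ts
import * as http from "http";

http.createServer((req, res) => {
  // The client's IP address arrives with every request. Conventional access
  // logs (e.g., the Apache/nginx "combined" format) record a line like:
  //   203.0.113.7 - - [09/Dec/2009:10:15:32 -0500] "GET / HTTP/1.1" 200 512
  console.log(`${req.socket.remoteAddress} ${req.method} ${req.url}`);
  res.end("hello");
}).listen(8080); // port chosen arbitrarily for the example
```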

I don’t think consumers’ expectations are terribly clear. Expectations are still being set, and my recent post about the White House’s cookie policy was a volley in the battle to set them.

My preference is for consumers to be empowered and required to protect themselves from cookie-based tracking that they don’t want. I believe consumers are responsible for their choices in computers, software, Internet connection, and security. No computer is ever “coaxed” into releasing information if it hasn’t been set up to allow it.

Protection against unwanted data release isn’t easy in a changing technology environment, but Internet users have a great deal of help in making their choices, and they will get better at it if their well-being requires it. The alternative is nannying and regulation of the type most libertarians object to.

In his post, Julian appears to agree that people shouldn’t expect privacy in messages posted to public fora but then switches the subject slightly. Drawing an analogy between Web surfing and a changing room at a clothing store, he suggests that much online behavior is like undressing in a cordoned-off area on someone else’s premises. Decency (and, Julian says, law) requires notice when people might be observed in that setting.

I fear that Julian has lumped a lot of very different kinds of interaction together, making the online world legible for the purpose of writing a uniform rule about how it should work. Planners must do away with complexity, of course, but that is why planning fails so badly compared to the self-organizing done in markets and reflected in common law rules.

Again, given the thousands of different contexts of online communication, I don’t think people’s expectations are settled or static. People’s expectations when clicking from site to site sweep across a much wider, newer landscape than when they are buying a toaster, where expectations truly are relatively settled.

But assuming that people do have the expectations Julian describes, will notice make them aware that those expectations are not being met? Will it empower them to protect their privacy? Our experience with first-party tracking suggests otherwise.

In the late 1990s, the U.S. commercial Internet adopted a strong custom of posting privacy policies. It’s worth noting that this was adopted without government coercion (though there was the threat of coercion—in our business, we never get controlled experiments). Well-intended though this was, it has not spawned a culture of privacy.

What evidence there is suggests that people don’t read privacy policies. When people choose online service providers, they don’t compare the written policies of different providers. Their sources of information instead include news stories, friends, blogs—a marketplace of information much more robust than these privacy policies.

Consumers do adjust the online products and providers they use, mostly by shunning what they find scary. Firms adjust their privacy practices in light of their own and other firms’ flubs. I think much or all of this would happen regardless of whether there was a privacy notice on every homepage. (Again, we lack controlled experiments.)

The few privacy advocates who read notices—and even many privacy advocates don’t bother—routinely complain about how permissive they are. Many notices say, essentially, “We care about your privacy a lot! And we do whatever we please with the information you give us!”

Consumers do not seem willing to punish sites for having such information policies. One possibility is that consumers don’t care about privacy in many circumstances. That’s not crazy. Another is that notices don’t inform. There’s a good chance that consumers take the existence of notice as an indication that they are being accorded privacy, regardless of what’s in the policy. Privacy notices may fool consumers into thinking they’re protected when they’re not.

In the main I can’t say our online culture is necessarily shaping up wrongly, but the presence of notice about first-party tracking has not made consumers much better off in terms of privacy. It may have given information to advocacy groups and watchdogs that they otherwise wouldn’t have gotten so easily, but links on every homepage are just ritual. The privacy conversation happens elsewhere. I don’t think this ritual should be extended and deepened with more notice about more things.

Julian is not alone in thinking it should, of course. There are many who would impose comprehensive notice regimes or refine the ones we’ve got. Many of these people confuse privacy notices with privacy, and privacy laws with privacy. I don’t think mandating privacy notices bears up as an effective consumer protection.

Easier Said Than Done

I also think there are a lot of practical problems and costs to mandating privacy notices.

As so many have before him, Julian asks for an “ordinary-language explanation” of what is going on. But we don’t yet have a reliable and well-understood language for describing all the things that happen with data. Much less do we know what features of data use are salient to consumers. Many blame corporate obfuscation for long, confusing privacy policies, but just try describing what happens to information about you when you walk down the street and the difficulty of writing privacy policies becomes clear.

Then there’s avoidance. A lot of tracking is fungible, and innovations in tracking are sure to come, on both the technical and the business side. If a notice regime were to stir consumer opposition to third-party tracking, the tracking could well shift back to first parties, who could then serve up the products of tracking as third parties do now. What will the rule have done, then, but distort and raise costs in information markets without improving privacy?

The answer when notice fails to protect privacy, of course, is to ban tracking altogether, a goal that I think some privacy advocates maintain sub rosa. This would undercut the free-content Internet, which is supported by advertising, and which uses tracking for targeting. Mandating notice is a step toward giving people privacy they may not want while taking away content they do.

Julian would propose an elegant rule, of course, but would it survive the trip through a legislature? We have experience there, too, with California’s privacy policy mandate. Does it look simple to you? As statutes go, it actually is. (California Business and Professions Code Section 22575-22579, you’ve just been damned with faint praise…!)

There are plenty of seams in it, though. Take what it means to “conspicuously post” a privacy notice—a defined term in the legislation. Last year, a brouhaha broke out over the meaning of “conspicuously post” with regard to Google’s privacy policies. It would have been funny if it weren’t so stupid. By the reckoning of many, Google was failing to “conspicuously post” its privacy policy by failing to put a link to it on its homepage.

Google, of course, is a search engine. It helped bring about the end of the portal era, during which we went to sites with great masses of links. Google works hard to maintain a clean, crisp, “anti-portal” homepage, and its privacy policy was and is easily found via search. But it could not withstand the pressure to post a privacy link on its home page. Today, more people probably click on that link by mistake than on purpose.

Is HTML the last protocol? How do you implement a link to a privacy notice on services of the future that don’t necessarily use the Web? How much money and time should a revolutionary new Internet device or service using a new protocol spend arguing to the Federal Trade Commission that it should be allowed to proceed?

Of course, every new regulation is wafer-thin. I don’t oppose them because each and every one of them lacks any merit—only because the entirety of them does more harm than alternatives would. So let’s now turn to my preferred alternative: common law.

Common Law Rules Rule

Julian analogizes his third-party notice rule to the common law contract doctrine of implied warranty, of which I approve because it has proven over generations to be a fair and efficient rule. Things sold as toasters are supposed to toast bread. If you’re selling a toaster that doesn’t do that, and if you don’t make that clear, you violate a term implied by common law into sales contracts. But rules that haven’t been tested and proven over time like this don’t deserve to be laws.

Until recently (in historical terms), all law was common law. People made up the laws that suited their needs and passed them from generation to generation. Julian’s description of common law as “parasitic” on social practice is inapt. Social practice and common law are on a continuum. When a custom becomes sufficiently ingrained and wrapped up with the rights we accord people, we treat that custom as law and penalize or punish deviations through coercive means. (I don’t think there should be a lot of law, of course.)

With our habit for personality cults, we like to think that Hammurabi, Justinian, and Napoleon were “law-givers,” but what they did was write down law that already existed in the practices of the people. (In an age of mass illiteracy, it’s doubtful that writing something down did much to affect people’s behavior.)

When civil law countries started writing summaries of their law, they took one road: expert lawmakers would decide the rules that govern society.  Common law countries went down another path, in which courts formalized the law discovery process but did not seek to supplant it.

Legislatures in both systems today are typically bodies of non-experts—neither legal experts nor subject matter experts—who deign to script how society should work rather than letting society decide for itself. As we see daily in Washington, D.C., the result is not a system that gravitates toward fairness or efficiency, but a series of compromises dividing goodies (money and rules) among the best-represented interests in society, the rest of the population be damned.

No legislature today, and for all his smarts not Julian, has the knowledge needed to write an appropriate rule about what (if anything) people should be told when they go to a Web site or click on a link. With users having the ability to discern what a link does, and having knowledge that the Internet is a big copying machine, I think that the most efficient, fair, and protective rule will probably be caveat clickor. But I am willing to wait and see if that is best.

If consumers want to know something before they click, they are well equipped to let Web sites know their preferences. Let social customs evolve to meet the needs of consumers in light of ongoing multi-layered change in the Internet and its use.

“But doesn’t an ever-changing Internet make the case for some modest regulation? The Internet is so new! We really must have baseline rules or we’ll have costly disorder! We pay the price every day for our failure to regulate because people aren’t going online like they would if they were confident of their privacy!”

These are arguments regulators and social engineers make to sound “market friendly.” The problem is that they rest on the same unsupported assertions that Julian has made about privacy expectations, notice, human wants, and the interactions among these things.

There is plenty of surmise but little good evidence that people are staying offline because of privacy concerns. There is little understanding of how to get people to protect their privacy. Notice is at best an unproven technique, more probably a waste of time.

You can regulate in haste, but you won’t necessarily achieve anything. And it’s not the job of legislators—certainly not Congress—to make the privately owned and operated Internet more user-friendly.

Julian has it backward to suggest that statutes should move in to stabilize expectations when technology is fast-changing. That’s precisely the wrong time to congeal the rules.

When existing law doesn’t serve new conditions, custom, followed by common law, slowly discovers adaptations to satisfy them. It takes some time—and it’s time that should be taken. The alternative, statutory law, has no corrective function to undo regulations that fail to suit later circumstances.

The notice rule Julian proposes is planning of the type we deplore when it comes to industrial production, the layout of towns and cities, transportation, energy, educational curricula, and so on. Why support it when it comes to online rules of engagement?

In my withering, fun attack on Julian’s notice rule, I’ve left out whether privacy notices should be enforceable. They should. As contract terms. I look forward to that rule being adopted at common law. I regret it each time the Federal Trade Commission disrupts the conditions that would establish that rule. And I’m eager to learn how society will solve the problem of damages.

A Bizarre Privacy Indictment

Page one of today’s Washington Times—above the fold—has a fascinating story indicting the White House for failing to disclose that it will collect and retain material posted by visitors to its pages on social networking sites like Facebook and YouTube. The story is fascinating because so much attention is being paid to it. (It was first reported, as an aside at least, by Major Garrett on Fox News a month ago.)

The question here is not over the niceties of the Presidential Records Act, which may or may not require collection and storage of the data. It’s over people’s expectations when they use the Internet.

Marc Rotenberg, president of the Electronic Privacy Information Center, said the White House signaled that it would insist on open dealings with Internet users and, in fact, should feel obliged to disclose that it is collecting such information.

Of course, the White House is free to disclose or announce anything it wants. It might be nice to disclose this particular data practice. But is it really a breach of privacy—and, through failure to notify, transparency—if there isn’t a distinct disclosure about this particular data collection?

Let’s talk about what people expect when they use the Internet and social networking sites. Though the Internet is a gigantic copying machine, some may not know that data is collected online. They may imagine that, in the absence of notice, the data they post will not be warehoused and redistributed, even though that’s exactly what the Internet does.

There can be special problems when it is the government collecting the information. The White House’s “flag [at] whitehouse [dot] gov” tip line was concerning because it asked Americans to submit information about others. There is a history of presidents amassing “enemies” lists. But this is not the complaint with White House tracking of data posted on its social networking sites.

People typically post things online because they want publicity for those things—often they want publicity for the fact that they are the ones posting, too. When they write letters, they give publicity to the information in the letter and the fact of having sent it. When they hold up signs, they seek publicity for the information on the signs, and their own role in publicizing it.

How strange that taking note of the things people publicize is treated as a violation of their privacy, and that failing to notify them that they will be observed and recorded is treated as a failure of transparency.

America, for most of what you do, you do not get “notice” of the consequences. Instead, in the real world and online, you grown-ups are “on notice” that information you put online can be copied, stored, retransmitted, and reused in countless ways. Aside from uses that harm you, you have little recourse against that after you have made the decision to release information about yourself.

The White House is not in the wrong here. If there’s a lesson, it’s that people are responsible for their own privacy and need to be aware of how information moves in the online environment.

Public Information and Public Choice

One of the high points of last week’s Gov 2.0 Summit was transparency champion Carl Malamud’s speech on the history of public access to government information – ending with a clarion call for government documents, data, and deliberation to be made more freely available online. The argument is a clear slam-dunk on simple grounds of fairness and democratic accountability. If we’re going to be bound by the decisions made by regulatory agencies and courts, surely at a bare minimum we’re all entitled to know what those decisions are and how they were arrived at. But as many of the participants at the conference stressed, it’s not enough for the data to be available – it’s important that it be free, and in a machine-readable form. Here’s one example of why, involving the PACER system for court records:

The fees for bulk legal data are a significant barrier to free enterprise, but an insurmountable barrier for the public interest. Scholars, nonprofit groups, journalists, students, and just plain citizens wishing to analyze the functioning of our courts are shut out. Organizations such as the ACLU and EFF and scholars at law schools have long complained that research across all court filings in the federal judiciary is impossible, because an eight cent per page charge applied to tens of millions of pages makes it prohibitive to identify systematic discrimination, privacy violations, or other structural deficiencies in our courts.

If you’re thinking in terms of individual cases – even those involving hundreds or thousands of pages of documents – eight cents per page might not sound like a very serious barrier. If you’re trying to do a meta-analysis that looks for patterns and trends across the body of cases as a whole, not only is the formal fee going to be prohibitive in the aggregate, but even free access won’t be much help unless the documents are in a format that can be easily read and processed by computers, given the much higher cost of human CPU cycles. That goes double if you want to be able to look for relationships across multiple different types of documents and data sets.
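A rough calculation makes the asymmetry concrete; the eight-cent fee comes from the passage above, while the page counts are assumptions for illustration.

```ts
const feePerPage = 0.08;                 // PACER's per-page charge
const pagesInOneBigCase = 2_000;         // assumed: a single document-heavy case
const pagesAcrossTheCorpus = 50_000_000; // assumed: "tens of millions of pages"

console.log(feePerPage * pagesInOneBigCase);    // $160 -- noticeable, not fatal
console.log(feePerPage * pagesAcrossTheCorpus); // $4,000,000 -- prohibitive for researchers
```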

All familiar enough to transparency boosters. Is there a reason proponents of limited government ought to be especially concerned with this, beyond a general fondness for openness? Here’s one reason. Public choice theorists often point to the problem of diffuse costs and concentrated benefits as a source of bad policy. In brief, a program that inefficiently transfers a million dollars from millions of taxpayers to a few beneficiaries will create a million-dollar incentive for the beneficiaries to lobby on its behalf, while no individual taxpayer has much motivation to expend effort on recovering his tiny share of the benefit of axing the program. And political actors have similarly strong incentives to create identifiable constituencies who benefit from such programs and kick back those benefits in the form of either donations or public support. What Malamud and others point out is that one thing those concentrated beneficiaries end up doing is expending resources to remain fairly well informed about what government is doing – what regulations and expenditures are being contemplated – in order to be able to act for or against them in a timely fashion.

Now, as the costs of organizing dispersed people get lower thanks to new technologies, we’re seeing increasing opportunities to form ad hoc coalitions supporting and opposing policy changes with more dispersed costs and benefits – which is good, and works to erode the asymmetry that generates a lot of bad policy. But incumbent constituencies have the advantage of already being organized and able to invest resources in identifying policy changes that implicate their interests. If ten complex regulations are under consideration, and one creates a large benefit to an incumbent constituent while imposing smaller costs on a much larger group of people, it’s a great advantage if the incumbent is aware of the range of options in advance, and can push for their favored option, while the dispersed losers only become cognizant of it when the papers report on the passage of a specific rule and slowly begin teasing out its implications.

Put somewhat more briefly: Technology that lowers organizing costs can radically upset a truly pernicious public choice dynamic, but only if the information necessary to catalyze the formation of a blocking coalition is out there in a form that allows it to be sifted and analyzed by crowdsourced methods first. Transparency matters less when organizing costs are high, because the fight is ultimately going to be decided by a punch up between large, concentrated interest groups for whom the cost of hiring experts to learn about and analyze the implications of potential policy changes is relatively trivial. As transaction costs fall, and there’s potential for spontaneous, self-identifying coalitions to form, those information costs loom much larger. The timely availability – and aggregability – of information about the process of policy formation and its likely consequences then suddenly becomes a key determinant of the power of incumbent constituencies to control policy and extract rents.

Picture Don Draper Stamping on a Human Face, Forever

Last week, a coalition of 10 privacy and consumer groups sent letters to Congress advocating legislation to regulate behavioral tracking and advertising, a phrase that actually describes a broad range of practices used by online marketers to monitor and profile Web users for the purpose of delivering targeted ads. While several friends at the Tech Liberation Front have already weighed in on the proposal in broad terms – in a nutshell: they don’t like it – I think it’s worth taking a look at some of the specific concerns raised and remedies proposed. Some of the former strike me as being more serious than the TLF folks allow, but many of the latter seem conspicuously ill-tailored to their ends.

First, while it’s certainly true that there are privacy advocates who seem incapable of grasping that not all rational people place an equally high premium on anonymity, it strikes me as unduly dismissive to suggest, as Berin Szoka does, that it’s inherently elitist or condescending to question whether most users are making informed choices about their privacy. If you’re a reasonably tech-savvy reader, you probably know something about conventional browser cookies, how they can be used by advertisers to create a trail of your travels across the Internet, and how you can limit this.  But how much do you know about Flash cookies? Did you know about the old CSS hack I can use to infer the contents of your browser history even without tracking cookies? And that’s without getting really tricksy. If you knew all those things, congratulations, you’re an enormous geek too – but normal people don’t.  And indeed, polls suggest that people generally hold a variety of false beliefs about common online commercial privacy practices.  Proof, you might say, that people just don’t care that much about privacy or they’d be attending more scrupulously to Web privacy policies – except this turns out to impose a significant economic cost in itself.
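For the curious, here is a minimal sketch of how that old CSS history-sniffing trick worked; the probe URLs are placeholders, and browsers have since restricted what getComputedStyle reports for visited links, so treat it as an illustration of the era’s technique rather than something that still works.

```ts
// Style visited links distinctively, then ask the browser how each probe link
// actually rendered; if the :visited color comes back, the URL is in the history.
const probeUrls = ["https://example.com/", "https://example.org/"]; // placeholders

const style = document.createElement("style");
style.textContent = "a:visited { color: rgb(255, 0, 0); }";
document.head.appendChild(style);

for (const url of probeUrls) {
  const link = document.createElement("a");
  link.href = url;
  document.body.appendChild(link);
  const visited = getComputedStyle(link).color === "rgb(255, 0, 0)";
  console.log(url, visited ? "visited" : "not visited");
}
```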

The truth is, if we were dealing with a frictionless Coasean market of fully informed users, regulation would not be necessary, but it would not be especially harmful either, because users who currently allow themselves to be tracked would all gladly opt in. In the real world, though, behavioral economics suggests that defaults matter quite a lot: Making informed privacy choices can be costly, and while an opt-out regime will probably yield tracking of some who, under conditions of full information and frictionless choice, would prefer not to be tracked, an opt-in regime will likely prevent tracking of folks who don’t object to it. And preventing that tracking also has real social costs, as Berin and Adam Thierer have taken pains to point out. In particular, it merits emphasis that behavioral advertising is regarded by many as providing a viable business model for online journalism, where contextual advertising tends not to work very well: There aren’t a lot of obvious products to tie in to an important investigative story about municipal corruption. Either way, though, the outcome is shaped by the default rule about the level of monitoring users are presumed to consent to. So which set of defaults ought we to prefer?

Here’s why I still come down mostly on Adam and Berin’s side, and against many of the regulatory remedies proposed. At the risk of stating the obvious, users start with de facto control of their data. Slightly less obvious: While users will tend to have heterogeneous privacy preferences – that’s why setting defaults either way is tricky – individual users will often have fairly homogeneous preferences across many different sites. Now, it seems to be an implicit premise of the argument for regulation that the friction involved in making lots of individual site-by-site choices about privacy will yield oversharing. But the same logic cuts in both directions: Transactional friction can block efficient departures from a high-privacy default as well. Even a default that optimally reflects the median user’s preferences or reasonable expectations is going to flub it for the outliers. If the variance in preferences is substantial, and if different defaults entail different levels of transactional friction, nailing the default is going to be less important than choosing the rule that keeps friction lowest. Given that most people do most of their Web surfing on a relatively small number of machines, this makes the browser a much more attractive locus of control. In terms of a practical effect on privacy, the coalition members would probably achieve more by persuading Mozilla to ship Firefox with third-party cookies rejected out of the box than from any legislation they’re likely to get – and indeed, it would probably have a more devastating effect on the behavioral ad market. Less bluntly, browsers could include a startup option that asks users whether they want to import an exclusion list maintained by their favorite force for good, as in the sketch below.
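A rough sketch of what that exclusion-list approach amounts to in practice; the host names and list contents are hypothetical, and the blunter built-in lever mentioned above corresponds to Firefox’s real network.cookie.cookieBehavior preference (a value of 1 blocks third-party cookies).

```ts
// A subscribed blocklist of tracking hosts, maintained by some trusted third
// party and refreshed periodically (contents here are hypothetical).
const exclusionList = new Set<string>([
  "tracker.example",
  "ads.example.net",
]);

// Decide whether a request made from a page should be blocked or stripped of
// cookies: it must go to a different host AND appear on the blocklist.
function shouldBlockThirdParty(requestHost: string, pageHost: string): boolean {
  const isThirdParty =
    requestHost !== pageHost && !requestHost.endsWith("." + pageHost);
  return isThirdParty && exclusionList.has(requestHost);
}

console.log(shouldBlockThirdParty("tracker.example", "news.example.com"));      // true
console.log(shouldBlockThirdParty("cdn.news.example.com", "news.example.com")); // false
```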

On the model proposed by the coalition, individuals have to make affirmative decisions about what data collection to permit for each Web site or ad network at least once every three months, and maybe each time they clear their cookies. If you think almost everyone would, if fully informed, opt out of such collection, this might make sense. But if you take the social benefits of behavioral targeting seriously, this scheme seems likely to block a lot of efficient sharing. Browser-based controls can still be a bit much for the novice user to grapple with, but programmers seem to be getting better and better at making it easier and more automatic for users to set privacy-protective defaults. If the problem with the unregulated market is supposed to be excessive transaction costs, it seems strange to lock in a model that keeps those costs high even as browser developers are finding ways to streamline that process. It’s also worth considering whether such rules wouldn’t have the perverse consequence of encouraging consolidation across behavioral trackers. The higher the bar is set for consent to monitoring, the more that consent effectively becomes a network good, which may encourage concentration of data in a small number of large trackers – not, presumably, the result privacy advocates are looking for. Finally – and for me this may be the dispositive point – it’s worth remembering that while American law is constrained by national borders, the Internet is not. And it seems to me that there’s a very real danger of giving the least savvy users a false sense of security – the government is on the job guarding my privacy! no need to bother learning about cookies! – when they may routinely and unwittingly be interacting with sites beyond the reach of domestic regulations.

There are similar practical difficulties with the proposal that users be granted a right of access to behavioral tracking data about them.  Here’s the dilemma: Any requirement that trackers make such data available to users is a potential security breach, which increases the chances of sensitive data falling into the wrong hands. I may trust a site or ad network to store this information for the purpose of serving me ads and providing me with free services, but I certainly don’t want anyone who sends them an e-mail with my IP address to have access to it. The obvious solution is for them to have procedures for verifying the identity of each tracked user – but this would appear to require that they store still more information about me in order to render tracking data personally identifiable and verifiable. A few ways of managing the difficulty spring to mind, but most defer rather than resolve the problem, and add further points of potential breach.

That doesn’t mean there’s no place for government or policy change here, but it’s not always the one the coalition endorses. Let’s look  more closely at some of their specific concerns and see which, if any, are well-suited to policy remedies. Only one really has anything to do with behavioral advertising, and it’s easily the weakest of the bunch. The groups worry that targeted ads – for payday loans, sub-prime mortgages, or snake-oil remedies – could be used to “take advantage of vulnerable consumers.” It’s not clear that this is really a special problem with behavioral ads, however: Similar targeting could surely be accomplished by means of contextual ads, which are delivered via relevant sites, pages, or search terms rather than depending on the personal characteristics or browsing history of the viewer – yet the groups explicitly aver that no new regulation is appropriate for contextual advertising. In any event, since whatever problem exists here is a problem with ads, the appropriate remedy is to focus on deceptive or fraudulent ads, not the particular means of delivery. We already, quite properly, have rules covering dishonest advertising practices.

The same sort of reply works for some of the other concerns, which are all linked in some more specific way to the collection, dissemination, and non-advertising use of information about people and their Web browsing habits. The groups worry, for instance, about “redlining” – the restriction or denial of access to goods, services, loans, or jobs on the basis of traits linked to race, gender, sexual orientation, or some other suspect classification. But as Steve Jobs might say, we’ve got an app for that: It’s already illegal to turn down a loan application on the grounds that the applicant is African American. There’s no special exemption for the case where the applicant’s race was inferred from a Doubleclick profile. But this actually appears to be something of a redlining herring, so to speak: When you get down into the weeds, the actual proposal is to bar any use of data collected for “any credit, employment, insurance, or governmental purpose or for redlining.” This seems excessively broad; it should suffice to say that a targeter “cannot use or disclose information about an individual in a manner that is inconsistent with its published notice.”

Particular methods of tracking may also be covered by current law, and I find it unfortunate that the coalition letter lumps together so many different practices under the catch-all heading of “behavioral tracking.” Most behavioral tracking is either done directly by sites users interact with – as when Amazon uses records of my past purchases to recommend new products I might like – or by third party companies whose ads place browser cookies on user computers. Recently, though, some Internet Service Providers have drawn fire for proposals to use Deep Packet Inspection to provide information about their users’ behavior to advertising partners – proposals thus far scuppered by a combination of user backlash and congressional grumbling. There is at least a colorable argument to be made that this practice would already run afoul of the Electronic Communications Privacy Act, which places strict limits on the circumstances under which telecom providers may intercept or share information about the contents of user communications without explicit permission. ECPA is already seriously overdue for an update, and some clarification on this point would be welcome. If users do wish to consent to such monitoring, that should be their right, but it should not be by means of a blanket authorization in eight-point type on page 27 of a terms-of-service agreement.

Similarly welcome would be some clarification on the status of such behavioral profiles when the government comes calling. It’s an unfortunate legacy of some technologically atavistic Supreme Court rulings that we enjoy very little Fourth Amendment protection against government seizure of private records held by third parties – the dubious rationale being that we lose our “reasonable expectation of privacy” in information we’ve already disclosed to others outside a circle of intimates. While ECPA seeks to restore some protection of that data by statute, we’ve made it increasingly easy in recent years for the government to seek “business records” by administrative subpoena rather than court order. It should not be possible to circumvent ECPA’s protections by acquiring, for instance, records of keyword-sensitive ads served on a user’s Web-based e-mail.

All that said, some of the proposals offered up seem, while perhaps not urgent, less problematic. Requiring some prominent link to a plain-English description of how information is collected and used constitutes a minimal burden on trackers – responsible sites already maintain prominent links to privacy policies anyway – and serves the goal of empowering users to make more informed decisions. I’m also warily sympathetic to the idea of giving privacy policies more enforcement teeth – the wariness stemming from a fear of incentivizing frivolous litigation. Still, the status quo is that sites and ad networks profitably elicit information from users on the basis of stated privacy practices, but often aren’t directly liable to consumers if they flout those promises, unless the consumer can show that the breach of trust resulted in some kind of monetary loss.

Finally, a quick note about one element of the coalition recommendations that neither they nor their opponents seem to have discussed much – the insistence that there be no federal preemption of state privacy law. I assume what’s going on here is that the privacy advocates expect some states to be more protective of privacy than Congress or the FTC would be, and want to encourage that, while libertarians are more concerned with keeping the federal government from getting involved at all. But really, if there’s an issue that was made for federal preemption, this is it. A country where vendors, advertisers, and consumers on a borderless Internet have to navigate 50 flavors of privacy rules to sell a banner ad or an iTunes track does not sound particularly conducive to privacy, commerce, or informed consumer choice.