Direct privacy legislation or regulation is unlikely to improve on the status quo. Over decades, a batch of policies referred to as "fair information practices" have failed to take hold because of their complexity and internal inconsistencies. Even modest regulation like mandated privacy notices has not produced meaningful improvements in privacy. Consumers generally do not read privacy policies, and they either do not consider privacy much of the time, or they value other things more than privacy when they interact online.
The online medium will take other forms with changing times, and regulations aimed at an Internet dominated by the World Wide Web will not work with future uses of the Internet. Privacy regulations that work "too well" may make consumers worse off overall, not only by limiting their access to content, but by giving super-normal profits to today's leading Internet companies and by discouraging consumer-friendly innovations.
The "online" and "offline" worlds are collapsing rapidly together, and consumers do not have separate privacy interests for one and the other. Likewise, people do not have privacy interests in their roles as consumers that are separate from their interests as citizens. If the federal government is going to work on privacy protection, it should start by getting its own privacy house in order.
Chairman Rockefeller, Ranking Member Hutchison, and members of the committee, thank you for inviting me to address your hearing on "Consumer Online Privacy."
My name is Jim Harper, and I am director of information policy studies at the Cato Institute. In that role, I study and write about the difficult problems of adapting law and policy to the challenges of the information age. Cato is a market liberal, or libertarian, think tank, and I pay special attention to preserving and restoring our nation's founding traditions of individual liberty, limited government, free markets, peace, and the rule of law.
My primary focus is on privacy and civil liberties, and I serve as an advisor to the Department of Homeland Security as a member of its Data Integrity and Privacy Advisory Committee. I am not a technologist, but a lawyer familiar with technology issues. As a former committee counsel in both the House and Senate, I understand lawmaking and regulatory processes related to technology and privacy. I have maintained a web site called Privacilla.org since 2000, cataloguing many dimensions of the privacy issue, and I also maintain an online federal legislative resource called WashingtonWatch.com, which has had over 1.6 million visitors in the last year.
What is Privacy?
Your hearing to explore consumer online privacy is welcome. There are many dimensions to privacy, and it is wise to examine all of them, making yourselves aware of the plethora of issues and considerations before turning to legislation or regulation.
People use the word "privacy" to describe many concerns in the modern world, including fairness, personal security, seclusion, and autonomy or liberty. Given all those salutary meanings, everyone wants "privacy," of course. Few concepts have been discussed so much without ever being solidly defined. But confusion about the meaning of the word makes legislation or regulation aimed at privacy difficult.
"Privacy" sometimes refers to the interest violated when a person's sense of seclusion or repose is upended. Telephone calls during the dinner hour, for example, spam emails, and - historically - the quartering of troops in private homes undermine privacy and the vaunted "right to be let alone."
For some, it is marketing that offends privacy - or at least targeted marketing based on demographic or specific information about consumers. Many people feel something intrinsic to individual personality is under attack when people are categorized, labeled, filed, and objectified for commerce based on data about them.
This is particularly true when incomplete data fails to paint an accurate picture. The worst denial of personality occurs in the marketing area when data and logic get it wrong, serving inappropriate marketing communications to hapless consumers. A couple who recently lost their baby receives a promotion for diapers or children's toys, for example. Or mail for a deceased parent continues coming long after his or her passing. In the informal sector, communities sometimes attack individuals because of the inaccurate picture gossip paints on the powerful medium of the Internet.
The "privacy" damage is tangible when credit bureaus and other reputation providers paint an incomplete or wrong picture. Employers and credit issuers harm individual consumers when they deny people work or credit based on bad data or bad decision rules.
Other kinds of "privacy" violations occur when criminals acquire personal information and use it for their malign purposes. The scourge of identity theft is a well-known "privacy" problem. Drivers Privacy Protection Acts passed in many state legislatures and in the U.S. Congress after actress Rebecca Schaeffer was murdered in 1989. Her stalker got her residence information from the California Department of Motor Vehicles. In a similar notable incident a decade later, Vermont murderer Liam Youens used a data broker to gather information as part of an Internet-advertised obsession with the young woman he killed.
"Privacy" is also under fire when information demands stand between people and their freedom to do as they please. Why on earth should a person share a phone number with a technology retailer when he or she buys batteries? The U.S. Department of Homeland Security has worked assiduously in what is now called the "Secure Flight" program to condition air travel on the provision of accurate identity information to the government, raising the privacy costs of otherwise free movement.
Laws banning or limiting medical procedures dealing with reproduction offend "privacy" in another sense of the word. There are a lot of privacy problems out there, and many of them blend together.
Privacy as Control of Personal Information
The strongest and most relevant sense of the word "privacy" - and the one I will focus on here - is its "control" sense: privacy as control over personal information. Privacy in this sense is threatened by the Internet, which is an unusual new medium for many people over the age of eighteen.
In his seminal 1967 book Privacy and Freedom, Alan Westin characterized privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." A more precise, legalistic definition of privacy in the control sense is: the subjective condition people experience when they have power to control information about themselves and when they have exercised that power consistent with their interests and values. The "control" sense of privacy alone has many nuances, and I will parse them here briefly.
Importantly, privacy is a subjective condition. It is individual and personal. One person cannot decide for another what his or her sense of privacy is or should be.
To illustrate this, one has only to make a few comparisons: Some Americans are very reluctant to share their political beliefs, refusing to divulge any of their leanings or the votes they have cast. They keep their politics private. Their neighbors may post yard signs, wear brightly colored pins, and go door-to-door to show affiliation with a political party or candidate. The latter have a sense of privacy that does not require withholding information about their politics.
Health information is often deemed intensely private. Many people closely guard it, sharing it only with doctors, close relatives, and loved ones. Others consent to have their conditions, surgeries, and treatments broadcast on national television and the Internet to help others in the same situation. More commonly, they relish the attention, flowers, and cards they receive when an illness or injury is publicized. Privacy varies in thousands of ways from individual to individual and from circumstance to circumstance.
An important conclusion flows from the observation that privacy is subjective: government regulation in the name of privacy can be based only on guesses about what "privacy" should look like. Such rules can only ape the privacy-protecting decisions that millions of consumers make in billions of daily actions, inactions, transactions, and refusals. Americans make their highly individual privacy judgments based on culture, upbringing, experience, and the individualized costs and benefits of interacting and sharing information.
The best way to protect true privacy is to leave decisions about how personal information is used to the people affected. Regulatory mandates that take decision-making power away from people will prevent them from striking the balances that make them the best off they can be. Sometimes it is entirely rational and sensible to share information.
At its heart, privacy is a product of autonomy and personal responsibility. Only empowered, knowledgeable citizens can formulate and protect true privacy for themselves, just as they individually pursue other subjective conditions, like happiness, piety, or success.
The Role of Law
The legal environment determines whether people have the power to control information about themselves. Law has dual, conflicting effects on privacy: Much law protects the privacy-enhancing decisions people make. Other laws undermine individuals' power to control information.
Various laws foster privacy by enforcing individuals' privacy-protecting decisions. Contract law, for example, allows consumers to enter into enforceable agreements that restrict the sharing of information involved in, or derived from, transactions.
Thanks to contract, one person may buy foot powder from another and elicit as part of the deal an enforceable promise never to tell another soul about the purchase. In addition to explicit terms, privacy-protecting confidentiality has long been an implied term in many contracts for professional and fiduciary services, like law, medicine, and financial services. Alas, legislation and regulation of recent vintage have undermined those protections.
Many laws protect privacy in other areas. Real property law and the law of trespass mean that people have legal backing when they retreat into their homes, close their doors, and pull their curtains to prevent others from seeing what goes on within. The law of battery means that people may put on clothes and have all the assurance law can give that others will not remove their clothing and reveal the appearance of their bodies without permission.
Whereas most laws protect privacy indirectly, a body of U.S. state law protects privacy directly. The privacy torts provide baseline protection for privacy by giving a cause of action to anyone whose privacy is invaded in any of four ways.
The four privacy causes of action, available in nearly every state, are:
- Intrusion upon seclusion or solitude, or into private affairs;
- Public disclosure of embarrassing private facts;
- Publicity that places a person in a false light in the public eye; and
- Appropriation of one's name or likeness.
While those torts do not mesh cleanly with privacy as defined here, they are established, baseline, privacy-protecting law.
Law is essential for protecting privacy, but much legislation plays a significant role in undermining privacy. Dozens of regulatory, tax, and entitlement programs deprive citizens of the ability to shield information from others. You need only look at the Internal Revenue Service's Form 1040 and related tax forms to see that.
Consumer Knowledge and Choice
I wrote above about the role of personal responsibility in privacy protection. Perhaps the most important, but elusive, part of privacy protection is consumers' exercise of power over information about themselves consistent with their interests and values. This requires consumers and citizens to be aware of the effects their behavior will have on exposure of information about them.
Technology and the world of commerce are rapidly changing, and personal information is both ubiquitous and mercurial. Unfortunately, there is no horn that sounds when consumers are sufficiently aware, or when their preferences are being honored. But study of other, more familiar, circumstances reveals how individuals have traditionally protected privacy.
Consider privacy protection in the physical world. For millennia, humans have accommodated themselves to the fact that personal information travels through space and air. Without understanding how photons work, people know that hiding the appearance of their bodies requires them to put on clothes. Without understanding sound waves, people know that keeping what they say from others requires them to lower their voices.
From birth, humans train to protect privacy in the "natural" environment. Over millions of years, humans, animals, and even plants have developed elaborate rules and rituals of information sharing and information hiding based on the media of light and sound.
Tinkering with these rules and rituals today would be absurd. Imagine, for instance, a privacy law that made it illegal to observe and talk about a person who appeared naked in public without giving the nudist a privacy notice and the opportunity to object. People who lacked the responsibility to put on clothes might be able to sue people careless enough to look at them and recount what they saw. A rule like that would be ridiculous.
The correct approach is for consumers to be educated about what they reveal when they interact online and in business so that they know to wear the electronic and commercial equivalents of clothing.
Of all the online privacy concerns, perhaps the most fretting has been done about "behavioral advertising" - sometimes referred to as "psychographic profiling" to get us really worked up. What is truly shocking about this problem, though, is that the remedy for most of it is so utterly simple: exercising control over the cookies in one's browser.
Cookies are small text files that a web site will ask to place in the memory of computers that visit it. Many cookies have distinct strings of characters in them that allow the web site to "recognize" the computer when it visits the site again. When a single domain places content across the web as a "third party" - something many ad networks do - it can recognize the same computer many places and gain a sense of the interests of the user.
The solution is cookie control: In the major browsers (Firefox and Internet Explorer), one must simply go to the "Tools" pull-down menu, select "Options," then click on the "Privacy" tab to customize one's cookie settings. In Firefox, one can decline to accept all third-party cookies, neutering the cookie-based data collection done by ad networks. In Internet Explorer, one can block all cookies, block all third-party cookies, or even choose to be prompted each time a cookie is offered.
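The tracking mechanism described above can be sketched in a few lines of Python. This is an illustrative model only: the `AdNetwork` class and the site names are invented for the example, and real ad networks set and read cookies through HTTP headers rather than function calls, but the linking logic is the same.

```python
# Sketch of third-party cookie tracking: an ad network whose content is
# embedded on many sites assigns each new browser a unique ID cookie, then
# recognizes that ID on every later visit to any participating site.
import uuid

class AdNetwork:
    def __init__(self):
        self.profiles = {}  # cookie ID -> list of sites where that browser was seen

    def serve_ad(self, cookie_id, visited_site):
        """Called when a page embeds this network's content as a third party."""
        if cookie_id is None:                 # no cookie yet: issue one
            cookie_id = str(uuid.uuid4())     # the "distinct string of characters"
        self.profiles.setdefault(cookie_id, []).append(visited_site)
        return cookie_id                      # browser stores this as the cookie

network = AdNetwork()
cookie = network.serve_ad(None, "news.example")        # first visit: cookie assigned
cookie = network.serve_ad(cookie, "shopping.example")  # same browser, different site
cookie = network.serve_ad(cookie, "sports.example")

# One browser's interests are now visible across three unrelated sites.
print(network.profiles[cookie])  # ['news.example', 'shopping.example', 'sports.example']
```

Blocking third-party cookies is equivalent to every visit arriving with `cookie_id=None`: each visit gets a fresh ID, and the network can no longer link them into a profile.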
Again, consumers educated about what they reveal when they interact online can make decisions about how to behave that will protect privacy much better - in all online contexts - than consumers unaware of how the world around them works.
Can Direct Regulation Protect Privacy Better?
Above, I wrote about how law protects people's privacy-protecting decisions. This unfortunately leaves them with the responsibility of making those decisions. Naturally, most privacy advocates - myself included - believe that people do not do enough to protect their privacy. Consciously or not, people seem to prioritize the short-term benefits of sharing personal information over the long-term costs to their privacy.
This poses the question: Can direct regulation protect consumers' privacy better than they can protect themselves?
There is a decades-long history behind principles aimed at protecting privacy and related interests, principles that are often put forward as a framework for legislative or regulatory directives.
In the early 1970s, a group called "The Secretary's Advisory Committee on Automated Personal Data Systems" within the Department of Health, Education, and Welfare did an important study of record-keeping practices in the computer age. The intellectual content of its report, commonly known as the "HEW Report," formed much of the basis of the Privacy Act of 1974. The report dealt extensively with the use of the Social Security Number as the issues stood at that time.
The HEW report advocated the following "fair information practices":
- There must be no personal-data record-keeping systems whose very existence is secret.
- There must be a way for an individual to find out what information about him is in a record and how it is used.
- There must be a way for an individual to prevent information about him obtained for one purpose from being used or made available for other purposes without his consent.
- There must be a way for an individual to correct or amend a record of identifiable information about him.
- Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take reasonable precautions to prevent misuse of the data.
These things sound wonderful in the abstract, but their relevance, worthiness, and cost-justifications vary widely from circumstance to circumstance.
In 1980, the Organization for Economic Cooperation and Development (OECD) issued similar, if more detailed, guidelines. The OECD Guidelines involve eight principles, which in different variations are often touted as "fair information practices" or "fair information practice principles."
They include a "Collection Limitation Principle," a "Data Quality Principle," a "Purpose Specification Principle," a "Use Limitation Principle," a "Security Safeguards Principle," an "Openness Principle," an "Individual Participation Principle," and an "Accountability Principle." The full OECD principles, in their sprawling glory, are reproduced in a footnote below.
In a 2000 report, the Federal Trade Commission came out with a briefer list of "fair information practices" (notice, choice, access, and security) and asked Congress for authority to impose them on the businesses of the country, even though a committee convened by the FTC could not reconcile the inherent tensions between access and security. Congress declined to take the FTC's advice.
These examples illustrate one of the problems with the idea of "baseline privacy regulation" for the Internet that has been a consistent call of many for over a decade. There are many good ideas and good practices described in the HEW Report, the OECD Guidelines, and in various other iterations of "fair information practices," but tensions among the principles and variations in their applicability to different circumstances make "FIPs" a poor guide for smart legislating.
"Fair information practices" remain largely aspirational after nearly 40 years, and where they have been implemented, privacy has not blossomed. The principal example is the Privacy Act of 1974, which has done little to give American citizens control over information the government collects. It is shot through with exceptions, and it is largely a paper tiger.
The Fair Credit Reporting Act has guided the development of the credit reporting industry for four decades, while insulating credit bureaus from state tort laws. During that period, the industry has become highly cartelized, consisting of three players (as discussed below, a typical consequence of regulatory barriers to entry). It has failed to innovate and become the reputation and identity service that the world of e-commerce could use. And - most importantly for these purposes - credit reporting is a consumer-unfriendly industry. Rather than working with consumers to develop mutually beneficial personal data repositories, the credit reporting industry serves its financial industry partners first, federal regulators second, and consumers as a rather distant afterthought.
The privacy regulations implemented under the Health Insurance Portability and Accountability Act are sometimes touted as reflecting "fair information practices." (With their breadth, any good data practice is arguably a FIP.) But health privacy has not materialized since Congress shrugged its shoulders and handed the privacy problem to the Department of Health and Human Services. Pre-HIPAA studies showing that patients sometimes avoided treatment due to privacy worries have not been matched by post-HIPAA studies showing that consumers confident of health privacy are getting medical care they would not have gotten.
Fair information practices are widely touted as models for direct regulation that would protect privacy. But the examples we have of FIP-style laws and regulations have not delivered privacy. Privacy protection is hard, and it is not amenable to top-down solutions.
Keeping it Simple: What About Privacy Notice?
If the full suite of "fair information practices" is too intricate and internally inconsistent to produce a flowering of privacy across the land, perhaps some minimal privacy regulation would move the ball in the right direction. Mandated privacy notices are widely regarded as a step that would put consumers in a position to protect privacy themselves.
One would think. But they haven't.
A decade ago, market pressure spurred commercial web sites to adopt and publish privacy policies. The FTC found in its 2000 report that 100% of the most popular sites on the web and 88% of randomly sampled sites had privacy disclosures of some kind. This was in the absence of any regulation requiring notice; it was simply the product of market-based consensus that privacy notice was an appropriate business practice.
However, over the ensuing decade it has become clear that privacy notices do not materially improve consumers' privacy practices. The Federal Trade Commission, other agencies, researchers like Lorrie Faith Cranor at Carnegie Mellon University's "CUPS" laboratory, and others are diligently pursuing strategies to make notices effective at communicating privacy information to consumers in the hope that they will act on that information. But none has yet borne fruit.
The FTC and seven other regulators recently revealed a new, "short" financial privacy notice (required annually of financial services providers by the Gramm-Leach-Bliley Act) that they say "will make it easier for consumers to understand how financial institutions collect and share information about consumers." Perhaps privacy awareness will flourish in the financial services area under this new regime, validating the widely derided privacy notices that clutter Americans' mailboxes. More likely, artificial "notice" will continue to lose currency as a tool for generating consumer focus on privacy.
Nutrition labels, the beloved model for privacy notices, have failed to stem the tide of fat washing over Americans' waistlines. Consumer behavior is difficult to control, as it should be in a free country.
Even the growth of handheld devices - an incremental step in comparison to what may come in the future - challenges the idea of notice. Given the very small screen space of many devices, where is a notice to be located? And where is a notice to be located when there isn't a hypertext "link" structure to follow?
Google, after all, is a search engine. In fact, it is the search engine that augured the decline of the Internet "portal" in favor of more fluid, search-based entrée to the web. Yet the California law requires a portal-style link, something that Google agonized over, being very proud of its very clean home page. Google now has a privacy link on its home page. It has cured its online paperwork violation.
As this story illustrates, Americans are not going on the web through portals any more. Americans are not going "online" sitting at computers looking at web pages any more. There is no end to the protocols that people may use to communicate on the Internet, and a notice regime designed for the World Wide Web so popular in the decade just past will fail to reach people in the decades to come.
What Does "Online" Mean Anyway? And Why Is It Important?
It is important to consider changes in technology of a different kind, particularly the vanishing border between "online" and "offline." As I deliver my oral testimony to the committee today, for example, I will be nominally "offline." However, audio and video of my presentation may be streamed live over the Internet or recorded and posted on the committee's web site or elsewhere. Reporters and researchers may take snippets of what I say and weave them into their work, posting those works online.
The phone in my pocket will be signaling its whereabouts (and inferentially mine) to nearby cell towers. Video of me entering, walking around inside, and leaving the Russell building may be captured and stored by the Capitol Police. Should the need arise, they may move this video into permanent storage.
There are privacy consequences from all these things. More than others, I suppose, I knowingly and willingly encounter privacy loss in order to be here and speak to you.
But what is the difference between the privacy consequences of this "offline" behavior and "online" behavior? Why should special privacy protections kick in when one formally sits down in front of a computer or uses a handheld device to go "online" if so much of "offline" life means the same thing?
The distinction between online and offline is blurring, and legislation or regulation aimed at protecting consumers "online" could create strange imbalances between different spheres of life. Consumers do not have a set of privacy interests that applies to the "online" world and another set that applies "offline."
To address online privacy alone is to miss the mark. This is not to say that the flesh-and-blood world should have privacy regulations like those that have been dreamed up for the Internet. Rather, privacy on the Internet might better be produced the way it is in the "real" world, by people aware of the consequences of their behavior acting in their own best interests.
Privacy Regulation Might Also Work "Too Well"
Consumer privacy legislation and regulation might fail because they miss new protocols or technologies - uses of the Internet that are not web-based, for example. But there is an equally plausible likelihood that privacy regulation works too well, in a couple of different senses.
Privacy regulation that works "too well" would give people more privacy than is optimal, making consumers worse off overall. Consumers have interests not just in privacy, but also in publicity, access to content, customization, convenience, low prices, and so on. Many of these interests are in tension with privacy, and giving consumers privacy at the cost of other things they prefer is not a good outcome.
The dominant model for producing Internet content - all the interaction, commentary, news, imagery, and entertainment that has the Internet thriving - is advertising support. Many of the most popular services and platforms are "free" because they host advertisements directed at their visitors and users. Part of the reason they can support themselves with advertising is that they have good information about users that allows ads to be appropriately targeted. It is a fact that well-targeted ads are more valuable than less-well-targeted ads.
This is important to note: Most web-based businesses do not "sell" information about their users. In targeted online advertising, the business model is generally to sell advertisers access to people ("eyeballs") based on their demographics. It is not to sell individuals' personal and contact information. Doing the latter would undercut the advertising business model and the profitability of the web sites carrying the advertising.
If privacy regulation "blinded" sites and platforms to relevant information about their visitors, the advertising-supported model for Internet content would likely be degraded. Consumers would be worse off - entombed by an excess of privacy when their preferences would be to have more content and more interaction than regulation allows advertising to support.
If the Federal Trade Commission's recommendations for "notice, choice, access, and security" had been fully implemented in 2000, for example, it is doubtful that Google would have had the same success it has had over the last decade. It might be a decent, struggling search engine today. But, unable to generate the kind of income it does, it might produce lower-quality search results, and it might not have had the assets to produce and support fascinating and useful products like Gmail, Google Maps, Google Docs, and literally dozens of other products it provides consumers.
Not having these things at our fingertips is difficult to imagine - it is much easier to assume that the Google juggernaut was fated from the beginning - but the rise of Google and all the access to information it gives us was contingent on a set of circumstances that allowed it to target ads to visitors in a highly customized and - to some - privacy-dubious way.
As a thought experiment, imagine taking away Google, Facebook, Apple's suite of consumer electronics (and the app universe that has sprung up within it), and the interactivity that AT&T facilitates. Consumers would rightly howl at the loss of richness to their lives, newly darkened by privacy. And we would all be worse off as the economy and society were starved of access to information.
All this is just to show that trading on personal information can make consumers better off overall. It is not to say that Google or any other company is the be-all and end-all, or that public policy should do anything to "prefer" any company. In fact, the other way that privacy regulation might work "too well" is by giving today's leading firms an advantage against future competitors.
A "barrier to entry" is something that prevents competition from entering a market. Barriers to entry often allow incumbents (like the established companies joining me at the witness table today) to charge higher prices and make greater profits than they otherwise would. Common barriers to entry (fair or unfair) include customer loyalty, economies of scale, control of intellectual property, and network effects, to name a few.
Government regulation can act as a barrier to entry in a few different ways. Aside from direct regulation of entry through licensing or grants of monopoly (issues not relevant here), incumbent firms can comply with regulations at a lower cost per sales unit. With a staff of lawyers already in place, the cost per customer of interpreting and applying any regulation is lower for large firms. Whether regulation is merited and tailored or not, small competitors "pay more" to comply with it. Regulation impedes their efforts to challenge established firms.
Established firms can strengthen this dynamic by taking part in crafting legislation and regulation. Their lobbyists, lawyers, and interest-group representatives - the good people gathered at this hearing today - will crowd around and work to protect their clients' interests in whatever comes out of the drafting process, here in Congress and at whatever agency implements any new law. Small, future competitors - unrepresented - will have no say, and new ways of doing business those competitors might have introduced may be foreclosed by regulation congenial to today's winners.
In his paper, The Durable Internet, my colleague, Cato adjunct fellow Timothy B. Lee, provides a useful history of how regulatory agencies have historically been turned to protecting the companies they are supposed to regulate. This would occur if the FCC were to regulate Internet service under a "net neutrality" regulation regime. It would occur if a federal agency were tasked with protecting privacy. It appears to have happened with the Minerals Management Service. The dynamic of "agency capture" is a mainstay of the regulatory studies literature.
Returning to the example of Google and the FTC's proposal for comprehensive regulation a decade ago: Had Congress given the FTC authority to impose broad privacy/fair information practice regulations, companies like Microsoft and Yahoo! may have turned the regulations to their favor. Today, the company that produces the most popular operating system might still be the most powerful player, and we might still be accessing the web through a portal. Consumers would be worse off for it.
For all the benefits today's leading companies provide, there is no reason they should not be subjected to as much competition as our public policy can allow. The spur of competition benefits consumers by lowering prices and driving innovations. Privacy regulation might work "too well" for them, locking in competitive advantages that turn away competition and allow them super-normal profits.
Comparisons between existing companies and future competitors are one thing. But a major defect of most proposals for privacy protection is their bald omission of an entire category of privacy threat: governments.
Privacy for Consumers But Not for Citizens?
Just as people do not have one set of privacy interests for the online world and one for offline, they do not have one set of privacy interests for commerce and another set for government. The privacy protections Americans have as consumers should be made available to them as citizens.
Indeed, given the unique powers of governments - to take life and liberty - Americans should have greater privacy protections from government than they do from private sector entities.
Governments thrive on information about people. Personal information allows governments to serve their citizenry better, to collect taxes, and to enforce laws and regulations. But governments stand in a very different position to personal information than businesses or individuals. Governments have the power to take and use information without permission. And there is little recourse against governments when they use information in ways that are harmful or objectionable.
In the modern welfare state, governments use copious amounts of information to serve their people. A program to provide medical care, for example, requires the government to collect a beneficiary's name, address, telephone number, sex, age, income level, medical condition, medical history, providers' names, and much more.
Governments also use personal information to collect taxes. This requires massive collections of information without regard to whether an individual views it as private: name, address, phone number, Social Security number, income, occupation, marital status, investment transactions, home ownership, medical expenses, purchases, foreign assets. The list is very, very long.
A third use government makes of personal information is to investigate crime and enforce laws and regulations. Governments' ability to do these things correlates directly to the amount of information they can collect about where people go, what they do, what they say, to whom they say it, what they own, what they think, and so on. We rely on government to investigate wrongdoing by examining information that is often regarded as private in the hands of the innocent. It is a serious and legitimate concern of civil libertarians that government collects too much information about the innocent in order to reach the guilty. The incentives that governments face all point toward greater collection and use of personal information about citizens. This predisposes them to violate privacy.
Yet "consumer privacy" bills planned and introduced in the current Congress do nothing to protect Americans' privacy from government. The leading proposals in the House - Rep. Boucher's (D-VA) draft legislation and H.R. 5777, the "BEST PRACTICES Act," introduced by Rep. Rush (D-IL) - simply exclude the federal government from their provisions.
In fairness, there may be jurisdictional reasons for these exemptions, but the hypocrisy would be a little too rank if the federal government were to impose privacy regulations on the private sector while its own profligacy with citizens' information continues.
If there is to be privacy legislation, the U.S. Congress should demonstrate the commitment of the federal government to getting its own privacy house in order. The federal government should practice what it preaches about privacy.
Privacy is a complicated human interest, of that there should be no doubt. In this long written testimony I have only begun to scratch the surface of the issues.
People use the word privacy to refer to many different human interests. The strongest sense of the word refers to control of personal information, which exists when people have legal power to control information and when they exercise that control consistent with their interests and values.
Direct privacy legislation or regulation is unlikely to improve on the status quo. Over decades, a batch of policies referred to as "fair information practices" have failed to take hold because of their complexity and internal inconsistencies. In the cases when they have been adopted, such as in the Privacy Act of 1974, privacy has not blossomed.
Even modest regulations like mandated privacy notices have not produced privacy in any meaningful sense. Consumers generally do not read privacy policies, and they either do not consider privacy much of the time or value other things more than privacy when they interact online.
The online medium will take other forms with changing times, and regulations aimed at an Internet dominated by the World Wide Web will not work with future uses of the Internet, as we are beginning to see in handheld devices. Privacy regulations that work "too well" may make consumers worse off overall, not only by limiting their access to content, but by giving super-normal profits to today's leading Internet companies and by discouraging consumer-friendly innovations.
It is an error to think that there are discrete "online" and "offline" experiences. Consumers do not have separate privacy interests for one and the other. Likewise, people do not have one set of privacy interests in their roles as consumers and a separate set in their roles as citizens. If the federal government is going to work on privacy protection, it should start by getting its own privacy house in order.
Privacy Advocates Who Don't Understand Privacy
In 2006 an engineer working on an experimental WiFi project for Google wrote a piece of code that sampled publicly broadcast data - the information that unencrypted WiFi routers make available by radio to any receiver within range. A year later, this code was included when Google's mobile team started a project to collect basic WiFi network data using Google's Street View cars.
When Google discovered this issue, it stopped running its Street View cars and segregated the data on its network, which it then disconnected to make the data inaccessible. Google announced the error to the public and has since been working with European data authorities to try to get rid of the data. The European authorities are making Google keep it pending their investigations.
Now a U.S. advocacy group, tripping over itself to make this a federal issue, has done more to invade privacy than Google did.
WiFi nodes are like little radio stations. When they are unencrypted, the data they send out can be interpreted fairly easily by whoever receives the radio signals.
Radio signals can travel long distances, and they pass through or around walls and vehicles, people, shrubs and trees. Broadcasting data by radio at the typical signal strength for a WiFi set-up creates a good chance that it is going to travel outside of one's house or office and beyond one's property line into the street.
For this reason, people often prevent others from accessing the information on WiFi networks by encrypting them. That is, they scramble the data so that it is gibberish to anyone who picks it up. (Or at least it takes an enormous amount of computing power to unscramble the signal.) Most people encrypt their WiFi networks these days, which is a good security practice, though it denies their neighbors the courtesy of using a handy nearby Internet connection if they need to.
Even on an unencrypted WiFi network, much sensitive content will be encrypted. Transactions with banks or payments on commerce sites will typically be encrypted by the web browser and server on the other end (the "s" in "https:" indicates this is happening), so their communications are indecipherable wherever they travel.
Given all this, it's hard to characterize data sent out by radio, in the clear, as "private." The people operating these unsecured WiFi nodes may have wanted their communications to be private. They may have thought their communications were private. But they were sending out their communications in the clear, by radio - again, like a little radio station broadcasting to anyone in range.
In picking up data with its Street View cars, Google captured whatever happened to be transmitted during the few seconds each car was in range of an unencrypted WiFi node. The flashes of data would be quite similar to driving past a row of apartments and seeing snippets of life inside whichever apartments had not fully drawn their curtains. Often, there is nothing happening at all. Once in a while, there may be a flicker of something interesting, but it is not tied to any particular identity.
Google never used this data, which was useless in any event. Not a single fact about a single identifiable WiFi user has been revealed. No personal information - much less private information - got any meaningful exposure.
But a U.S. advocacy group seeking to make a federal case of this story tripped over its privacy shoelaces in doing so. Apparently, researchers for this self-described consumer organization looked up the home addresses of Members of Congress. They went to the homes of these representatives, and they "sniffed" to see if there were WiFi networks in operation there. Then they publicized what they found, naming Members of Congress who operate unencrypted WiFi nodes.
If you care about privacy, this behavior is worse than what Google did. In its gross effort to draw attention to Google's misdeed, this group collected information on identifiable individuals - these Members of Congress - and put that information in a press release. That is more "stalkerish" and more exposing of personal information than driving past in an automobile, picking up with indifference whatever radio signals are accessible from the street.
The behavior of this group is not a privacy outrage. Politicians volunteer to be objects of this kind of intrusion when they decide that they are qualified to run for federal elective office. It simply illustrates how difficult the "privacy" issue is, when a group pulling off a stunt to draw attention to privacy concerns does more harm to privacy than the "wrongdoer" they are trying to highlight.
Facebook's "News Feed": Consumers' Privacy Interests Are Unpredictable and Changing
In September 2006, Facebook - the rapidly growing "social networking" site - added a feature that it called "News Feed" to the home pages of users. News Feed would update each user regularly on their home pages about the activities of their friends, using information that each friend had posted on the site. "News Feed" was met with privacy outrage. In the view of many Facebook users, the site was giving too much exposure to information about them.
But Facebook pushed back. In a post on the Facebook blog titled, "Calm down. Breathe. We hear you," CEO Mark Zuckerberg wrote:
This is information people used to dig for on a daily basis, nicely reorganized and summarized so people can learn about the people they care about. You don't miss the photo album about your friend's trip to Nepal. Maybe if your friends are all going to a party, you want to know so you can go too. Facebook is about real connections to actual friends, so the stories coming in are of interest to the people receiving them, since they are significant to the person creating them.
Though Facebook did make some changes, users ultimately found that News Feed added value to their experience of the site. Today, News Feed is an integral part of Facebook, and many users would probably object vociferously if it were taken away.
This is not to say that Facebook is always right or that it is always going to be right. It illustrates how consumers' privacy interests are unsettled and subject to change. Their self-reported interests in privacy may change - and may change rapidly.
The Facebook "News Feed" example is one where consumers looked at real trade-offs between privacy and interaction/entertainment. After balking, they ultimately chose more of the latter.
Consider how well consumers might do with privacy when they are not facing real trade-offs. Consumer polling on privacy generally uses abstract questions to discover consumers' stated privacy preferences. There is little policymaking value in polling data. Determining consumers' true interests in privacy and other values is difficult and complex, but it is taking place every day in the rigorous conditions of the marketplace, where market share and profits are determined by companies' ability to serve consumers in the best ways they can devise.
Some economic studies have suggested how much people value privacy. The goal of privacy advocacy should not be to force privacy protections on a public that does not want them, but to convince consumers to value privacy more.