
Beacon Lessons

February 8, 2008 • TechKnowledge No. 112

Last December’s controversy over Facebook’s Beacon advertising program has triggered discussion about whether online services are going too far in their use of customer information. The introduction of Beacon is now widely agreed to have been a misstep on Facebook’s part. But it’s important to keep in mind that experimentation is essential to technological progress. Our current intuitions about which types of information sharing respect users’ privacy and which invade it are highly sensitive to implementation details and may also be driven by misleading analogies to older technologies. Only by experimenting with new features, informing their customers of what they’re doing, and listening to customer feedback will companies learn how to think about privacy in the fast-changing online world.

Indeed, not all online programs that initially drew negative customer reactions have turned out to be bad ideas. Cutting-edge online offerings often push users outside of their comfort zones, but users sometimes discover that a service is more useful and less frightening than they expected. This suggests that it would be a mistake to cement our current intuitions about privacy into prophylactic privacy regulations. Policymakers should give online firms like Facebook maximum flexibility to use customer information in innovative ways.

Several examples from recent Internet history illustrate the trade-offs inherent in privacy, and they show how real-world experience can soften user opposition to new online offerings. Consider cookies, the small files a website can deposit on your hard drive. In 1997, one privacy advocate told Time magazine: “Cookies represent a way of watching consumers without their consent, and that is a fairly frightening phenomenon.”

Yet in recent years, cookies have become ubiquitous online, and user opposition to them has dissipated. One reason is that as the web has become more familiar, the concept of websites “watching you” no longer seems as sinister. Cookies spare users the hassle of providing their usernames and passwords to a site on every visit. Some of the more outrageous claims, such as the idea that cookies collect information about your browsing habits for third parties, are untrue. And browsers have been updated to give users much more fine-grained control over which websites can set cookies on their computers.
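To make that login-persistence mechanism concrete, here is a minimal sketch using only Python’s standard library. The cookie name “session_id” and the toy server are hypothetical illustrations, not any particular site’s implementation.

    # A toy HTTP server that issues a cookie on the first visit and
    # recognizes the returning browser afterward. Illustrative only.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from http.cookies import SimpleCookie
    import uuid

    class CookieDemo(BaseHTTPRequestHandler):
        def do_GET(self):
            cookies = SimpleCookie(self.headers.get("Cookie", ""))
            self.send_response(200)
            if "session_id" in cookies:
                # Returning visitor: the browser sent the cookie back, so
                # the site can skip asking for credentials again.
                body = "Welcome back, session " + cookies["session_id"].value
            else:
                # First visit: hand the browser a cookie via Set-Cookie.
                jar = SimpleCookie()
                jar["session_id"] = uuid.uuid4().hex
                self.send_header("Set-Cookie", jar["session_id"].OutputString())
                body = "First visit: cookie set"
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body.encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CookieDemo).serve_forever()

Note that the browser, not the site, decides whether to store the cookie and send it back, which is exactly what makes the fine-grained browser controls mentioned above possible.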

Another privacy flap that blew over quickly was the 2004 introduction of Gmail. Unlike competing sites that relied on generic banner ads, Google customized the ads it displayed based on the content of users’ emails. This generated a lot of controversy. Some people didn’t like the idea of Google’s servers “scanning” people’s email. A California state senator even proposed legislation to block the service, and the World Privacy Forum and more than two dozen other privacy and civil liberties groups urged Google to suspend Gmail.

Yet despite the concerns of privacy advocates, users beat down Google’s door for a Gmail account. A lot of users thought Gmail’s unique features — a groundbreaking user interface and a ton of storage space — were worth the risks. As I argued at the time, Gmail’s critics missed three important points. First, no one other than the recipients ever saw the contents of users’ emails. It’s true that emails are “scanned” by Google’s ad-placement algorithms, but it’s equally true that the emails are “scanned” by Google’s spam filter, and few object to that. Second, users came to appreciate that Gmail’s targeted ads tend to be less numerous and more relevant than traditional banner ads. Finally, the increased revenues generated by Google’s ad-targeting program helped finance the expense of providing each user with a then-astronomical amount of free storage space.
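The parallel between the two kinds of “scanning” is easy to see in code. The sketch below is purely illustrative (naive keyword matching, nothing like Google’s actual systems): both functions read the same text algorithmically, and no human ever sees it.

    # Illustrative only: a naive spam check and a naive ad matcher.
    # Both "scan" the email text the same way; neither involves a person.
    SPAM_WORDS = {"lottery", "winner", "inheritance"}
    AD_KEYWORDS = {"hiking": "Ad: trail boots on sale",
                   "camera": "Ad: 20% off lenses"}

    def is_spam(email_text):
        words = set(email_text.lower().split())
        return bool(words & SPAM_WORDS)

    def pick_ad(email_text):
        words = set(email_text.lower().split())
        for keyword, ad in AD_KEYWORDS.items():
            if keyword in words:
                return ad
        return None

    message = "Planning a hiking trip next month, bringing my camera"
    print(is_spam(message))  # False: the spam filter scanned the text
    print(pick_ad(message))  # "Ad: trail boots on sale": so did the ad matcher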

A more recent privacy controversy involved Facebook’s news feed. In September 2006, Facebook announced that it would begin offering a feature that helped users track changes to their friends’ online profiles. Within days of its introduction, 700,000 Facebook users had joined a group opposing the new feature. Facebook scrambled to add additional privacy controls, giving users a say over which categories of information would appear in the news feed. But the company refused to back down on the basic concept, and the feature remained on by default. As with Gmail, Facebook users’ outrage dissipated once they got the opportunity to try the news feed for themselves. They discovered that they liked being able to keep up with their friends’ activities on the site efficiently. Indeed, the feature has proven so popular that MySpace recently unveiled a news feed-like feature of its own, although it has been more cautious about deploying it.
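The privacy controls Facebook added amount to a per-category filter on what gets broadcast. Here is a hypothetical sketch of that idea (the category names and data model are invented for illustration, not Facebook’s design): events appear in the feed by default, and a user opts particular categories out.

    # Hypothetical sketch of per-category feed controls. Everything is
    # on by default; a user hides specific categories of profile changes.
    from dataclasses import dataclass, field

    @dataclass
    class FeedPreferences:
        hidden_categories: set = field(default_factory=set)

    @dataclass
    class ProfileEvent:
        author: str
        category: str  # e.g. "relationship_status", "photos"
        text: str

    def visible_events(events, prefs_by_user):
        # Keep only events whose author hasn't hidden that category.
        default = FeedPreferences()
        return [e for e in events
                if e.category not in
                prefs_by_user.get(e.author, default).hidden_categories]

    alice = FeedPreferences(hidden_categories={"relationship_status"})
    events = [ProfileEvent("alice", "relationship_status", "Alice is now single"),
              ProfileEvent("alice", "photos", "Alice added 3 photos")]
    print(visible_events(events, {"alice": alice}))  # only the photos event survives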

Too often, scholars make the mistake of treating privacy as a one-dimensional attribute, with more privacy always being better than less. But some amount of information sharing is almost always a prerequisite to providing valuable services to users. Total privacy would often mean total isolation from the Internet, and therefore no online services at all.

In reality, privacy involves a series of trade-offs. In any online interaction, users give up a certain amount of personal information in exchange for something of value. The goal, therefore, should not be to maximize “privacy” in the abstract, but to maximize users’ knowledge of and control over privacy decisions so they can maximize their welfare as they define it for themselves. Legal rules that categorically prohibit online services from retaining particular types of information, or from using it in particular ways, may ultimately harm users by preventing the experimentation that helps both users and online providers learn how privacy ought to work.

Similar experimentation with government data collection and surveillance is not nearly so welcome. Privacy advocates and consumers are right to worry that data collected in the private sector will be converted to other uses, up to and including massive data-collection programs like the ill-fated Total Information Awareness program. TIA was defunded in 2003, but the dream of using Americans’ data for security lives on, never mind that mass surveillance and predictive data mining are ineffective at discovering terrorism. That danger, however, is not a reason to regulate innovators in the voluntary sector. Rather, there should be a united push to restore or establish limits on government data collection and use that are appropriate for the online world we live in today.

As to private companies, regulators should remember that different users will have different levels of concern about the ways their information might be used. Some will be very selective about the information they’re willing to share. Others may be willing to share more information in exchange for better or more convenient online services. A well-functioning market will allow companies to experiment with a variety of policies and allow consumers to choose the ones they prefer.

Facebook’s Beacon program is an example of a privacy trade-off that most users did not find attractive. Customers spoke, and Facebook responded to their complaints. Whatever problems Beacon caused at the time, they did far less damage than regulators would do by enacting new rules that prevented further experimentation.
