
The Fatal Conceit of the “Right to be Forgotten”

Intelligence Squared hosted a lively debate last week over the so-called “Right to be Forgotten” embraced by European courts—which, as tech executive Andrew McLaughlin aptly noted, would be more honestly described as a “right to force others to forget.” The primary consequence of this “right” thus far is that European citizens are entitled to demand that search engines like Google censor the results returned for searches on their names, provided those results are “inadequate, irrelevant, or no longer relevant.” In other words, if you’re unhappy that an unflattering item—such as a news story—shows up as a prominent result for your name, you can declare it “irrelevant” even if it is entirely truthful and ask Google to stop showing it as a result for such searches, with ultimate recourse to the courts if the company refuses. Within two months of the ruling establishing the “right,” Google received more than 70,000 such requests.

Hearteningly, the opponents of importing this “right” to the United States won the debate by a large margin, but it occurred to me that one absolutely essential reason for rejecting this kind of censorship process was only obliquely invoked. As even the defenders of the Right to be Forgotten conceded, it would be inappropriate to allow a person to suppress search results that were of some legitimate public value: Search engines are obligated to honor suppression requests only when linking some piece of truthful information to a person’s name would be embarrassing or harmful to that person without some compensating benefit to those who would receive the information. Frequent comparison was made to the familiar legal standards that have been applied to newspapers publishing (lawfully obtained) private information about non-public figures. In those cases, of course, the person seeking to suppress the information is typically opposed in court by the entity publishing the information—such as a newspaper—which is at least in a position to articulate why it believes there is some public interest in that information at the time of publication.

Google Co-Founders Sergey Brin & Larry Page: Health Care Regulation Is Blocking Innovation

At a forum sponsored by Khosla Ventures, Google co-founders Sergey Brin and Larry Page discussed the burden of health care regulations in the United States. When asked, “Can you imagine Google becoming a health company?”, Brin responded:

Health is just so heavily regulated, it’s just a painful business to be in. It’s just not necessarily how I want to spend my time. Even though we do have some health projects, and we’ll be doing that to a certain extent. But I think the regulatory burden in the U.S. is so high that I think it would dissuade a lot of entrepreneurs.

Page agreed:

I am really excited about the possibility of data also to improve health. But I think that’s what Sergey’s saying. It’s so heavily regulated, it’s a difficult area…I do worry, you know, we kind of regulate ourselves out of some really great possibilities.

But surely, the United States does not have government-run health care.

The discussion begins at about 29:00.

E-Mail Privacy Laws Don’t Actually Protect Modern E-mail, Court Rules

In case further proof were needed that we’re long overdue for an update of our digital privacy laws, the South Carolina Supreme Court has just ruled that e-mails stored remotely by a provider like Yahoo! or Gmail are not communications in “electronic storage” for the purposes of the Stored Communications Act, and therefore not entitled to the heightened protections of that statute.

There are, fortunately, other statutes barring unauthorized access to people’s accounts, and one appellate court has ruled that e-mail is at least sometimes protected from government intrusion by the Fourth Amendment, independently of what any statute says. But given the variety of electronic communication services that exist in 2012, nobody should feel too confident that the courts will be prepared to generalize that logic. It is depressingly easy, for example, to imagine a court ruling that users of a service like Gmail, whose messages are scanned by Google’s computers to automatically deliver tailored advertisements, have therefore waived the “reasonable expectation of privacy” that confers Fourth Amendment protection. Indeed, the Justice Department has consistently opposed proposals to clearly require a warrant for scrutinizing electronic communications, arguing that it should often be able to snoop through citizens’ digital correspondence based on a mere subpoena or a showing of “relevance” to a court.

The critical passage at issue in this case—which involves private rather than governmental snooping—is the definition of “electronic storage,” which covers “temporary, intermediate storage of a wire or electronic communication incidental to the electronic transmission thereof” as well as “any storage of such communication by an electronic communication service for the purposes of backup protection of such communication.” The justices all agreed that the e-mails were not in “temporary, intermediate” storage because the legitimate recipient had already read them. They also agreed—though for a variety of reasons—that the e-mails were not in “backup” storage.

Some took this view on the grounds that storage “by an electronic communication service for the purposes of backup protection” encompasses only separate backups created by the provider for its own purposes, and not copies merely left remotely stored in the user’s inbox. This strikes me as a somewhat artificial distinction: why do the providers create backups? Well, to ensure that they can make the data available to the end user in the event of a crash. The copy is kept for the user’s ultimate benefit either way. One apparent consequence of this view is that it would make a big difference if read e-mails were automatically “deleted” and moved to a “backup” folder, even though this would be an essentially cosmetic alteration to the interface.

Others argued that a “backup” presumed the existence of another, primary copy and noted there was no evidence the user had “downloaded” and retained copies of the e-mails in question. This view rests on a simple technical confusion. If you have read your Gmail on your home computer or mobile device, then of course a copy of that e-mail has been downloaded to your device—otherwise you couldn’t be reading it. This is obscured by the way we usually talk: we say we’re reading something “on Google’s website”—as though we’ve somehow traveled on the Web to visit another location where we’re viewing the information. But this is, of course, just a figure of speech: what you’re actually reading is a copy of the data from the remote server, now residing on your own device. Moreover, it can’t be necessary for the user to retain that copy, since that would rather defeat the purpose of making a “backup,” which is to guarantee that you still have access to your data after it has been deleted from your main device! The only time you actually need a backup is when you don’t still retain a copy of the data elsewhere.
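
For the technically inclined, here is a minimal sketch of what any mail client has to do before it can show you a “remote” message, written in Python using the standard imaplib module. Gmail’s web interface speaks HTTP rather than IMAP, but the principle is identical; the host, account, and password below are placeholders, not details from the case. The point is simply that displaying the message requires the server to transmit a copy of it to your machine:

    import email
    import imaplib

    HOST = "imap.example.com"   # placeholder mail provider
    USER = "alice@example.com"  # placeholder account
    PASSWORD = "app-password"   # placeholder credential

    with imaplib.IMAP4_SSL(HOST) as conn:
        conn.login(USER, PASSWORD)
        conn.select("INBOX", readonly=True)

        # Ask the server for the IDs of every message in the mailbox.
        _, data = conn.search(None, "ALL")
        latest_id = data[0].split()[-1]

        # FETCH transfers the full raw message over the network to this
        # machine; in other words, a copy is downloaded whether or not
        # the user ever thinks of it as a "download."
        _, msg_data = conn.fetch(latest_id, "(RFC822)")
        raw_bytes = msg_data[0][1]

        message = email.message_from_bytes(raw_bytes)
        print("Subject:", message["Subject"])

Whether those bytes then linger on your laptop or are discarded the moment you close the tab is an implementation detail; the download itself has already happened.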

Still, this isn’t really the court’s fault. Whether or not this interpretation makes sense, it at least arguably does reflect what Congress intended when the Stored Communications Act was passed back in 1986, when there was no such thing as Webmail, when storage space was expensive, and when everyone assumed e-mail would generally vanish from the user’s remote inbox upon download. The real problem is that we’ve got electronic privacy laws that date to 1986 and, as a result, make all sorts of distinctions that are nonsensical in the modern context of routine cloud storage. Legislation to drag the statute into the 21st century has been introduced, but alas, there’s little indication Congress is in much of a rush to get it passed.

Three Lessons from the Increasingly Irrelevant Annual Wiretap Report

The 2011 Wiretap Report was released this weekend, providing an overview of how federal and state governments used wiretapping powers in criminal investigations. (Surveillance for intelligence purposes is covered in a separate, far less informative report.) There’s plenty of interesting detail, but here’s the bottom line:

After climbing 34 percent in 2010, the number of federal and state wiretaps reported in 2011 decreased 14 percent. A total of 2,732 wiretaps were reported as authorized in 2011, with 792 authorized by federal judges and 1,940 authorized by state judges…. Compared to the numbers approved during 2010, the number of applications reported as approved by federal judges declined 34 percent in 2011, and the number of applications approved by state judges fell 2 percent. The reduction in wiretaps resulted primarily from a drop in applications for narcotics.

So is the government really spying on us less? Is the drug war cooling off? Well, no, that’s lesson number one: Government surveillance is now almost entirely off the books.

The trouble, as Andy Greenberg of Forbes explains, is that we’ve got analog reporting requirements in a digital age. The courts have to keep a tally of how often they approve traditional intercepts that are primarily used to pick up realtime phone conversations—96 percent of all wiretap orders. But phone conversations represent an ever-dwindling proportion of modern communication, and police almost never use a traditional wiretap order to pick up digital conversations in realtime. Why would they? Realtime wiretap orders require jumping all sorts of legal hurdles that don’t apply to court orders for stored data, which is more convenient anyway, since it enables investigators to get a whole array of data, often spanning weeks or months, all at once. But nobody is required to compile data on those types of information requests, even though they’re often at least as intrusive as traditional wiretaps.

From what information we do have, however, it seems clear that phone taps are small beer compared to other forms of modern surveillance. As Greenberg notes, Verizon reported fielding more than 88,000 requests for data in 2006 alone. These would have ranged from traditional wiretaps, to demands for stored text messages and photos, to “pen registers” revealing a target’s calling patterns, to location tracking orders, to simple requests for a subscriber’s address or billing information. Google, which is virtually unique among major Internet services in voluntarily disclosing this sort of information, fielded 12,271 government requests for data, and complied with 11,412 of them. In other words, just one large company reports far more demands for user information than all the wiretaps issued last year combined. And again, that is without even factoring in the vast amount of intelligence surveillance that occurs each year: the thousands of FISA wiretaps, the tens of thousands of National Security Letters (which Google is forbidden to include in its public count) and the uncountably vast quantities of data vacuumed up by the NSA. At what point does the wiretap report, with its minuscule piece of the larger surveillance picture, just become a ridiculous, irrelevant formality?
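
To put the scale in perspective, here is a back-of-the-envelope comparison using only the figures cited above; it adds no new data, it just restates the mismatch:

    # Figures cited in this post: 2,732 wiretap orders reported for 2011,
    # versus 12,271 government data requests that Google says it fielded
    # (complying with 11,412 of them).
    wiretaps_2011 = 2_732
    google_requests = 12_271
    google_complied = 11_412

    print(f"Google requests vs. all reported wiretaps: {google_requests / wiretaps_2011:.1f}x")
    print(f"Google compliance rate: {google_complied / google_requests:.0%}")

That works out to roughly four and a half demands to a single company for every wiretap order reported nationwide—before counting Verizon, national security letters, or anything the NSA collects.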

Lesson two: The drug war accounts for almost all criminal wiretaps. Wiretaps may be down a bit in 2011, but over the long term they’ve still increased massively. Since 1997, even as communication has migrated from telephone networks to the internet on a mass scale, the annual number of wiretaps has more than doubled. And as this handy chart assembled by security researcher Chris Soghoian shows, our hopeless War on Drugs is driving almost all of it: for fully 85 percent of wiretaps last year, a drug offense was the most serious offense listed on the warrant application—compared with “only” 73 percent of wiretaps in 1997. Little surprise there: when you try to criminalize a transaction between a willing seller and a willing buyer, enforcement tends to require invasions of privacy. Oddly, law enforcement officials tend to gloss over these figures when asking legislators for greater surveillance authority. Perhaps citizens wouldn’t be as enthusiastic about approving these intrusive and expensive spying powers if they realized they were used almost exclusively to catch dope peddlers rather than murderers or kidnappers.

Speaking of dubious claims, lesson three: The encryption apocalypse is not nigh. As those of you who are both extremely nerdy and over 30 may recall, back in the 1990s we had something called the “Crypto Wars.” As far as the U.S. government was concerned, strong encryption technology was essentially a military weapon—not the sort of thing you wanted to allow in private hands, and certainly not something you could allow to be exported around the world. Law enforcement officials (and a few skittish academics) warned of looming anarchy unless the state cracked down hard on so-called “cypherpunks.” The FBI’s Advanced Telephony Unit issued a dire prediction in 1992 that within three years, they’d be unable to decipher 40 percent of the communications they intercepted.

Fortunately, they lost, and strong encryption in private hands has become the indispensable foundation of a thriving digital economy—and a vital shield for dissidents in repressive regimes. Frankly, it would probably have been worth the tradeoff even if the dire predictions had been right. But as computer scientist Matt Blaze observed back when the 2010 wiretap report was released, Ragnarok never quite arrives. The latest numbers show that investigators encountered encryption exactly 12 times in all those thousands of wiretaps. And how many times did that encryption prevent them from accessing the communication in question? Zero. Not once.

Now, to be sure, precisely because police seldom use wiretap orders for e-mail, that’s also a highly incomplete picture of the cases where investigations run up against encryption walls. But as the FBI once again issues panicked warnings that they’re “going dark” and demands that online companies be required to compromise security by building surveillance backdoors into their services, it’s worth recalling that we’ve heard this particular wolf cry before. It would have been a disastrous mistake to heed it back then, and on the conspicuously scanty evidence being offered during the encore, it would be crazy to approach these renewed demands with anything less than a metric ton of salt.

Kashmir Hill Has It Right…

on the Google privacy policy change.

The idea that people should be able to opt out of a company’s privacy policy strikes me as ludicrous.

Plus she embeds a valuable discussion among her Xtranormal friends. Highlight:

“Well, members of Congress don’t send angry letters about privacy issues very often.”

“Oh, well, actually, they do.”

Read the whole thing. Watch the whole thing. And, if you actually care, take some initiative to protect your privacy from Google, a thing you are well-empowered to do by the browser and computer you are using to view this post.

The Lives of Others 2.0

Tattoo it on your forearm—or better, that of your favorite legislator—for easy reference in the next debate over wiretapping: government surveillance is a security breach—by definition and by design. The latest evidence of this comes from Germany, where there’s growing furor over a hacker group’s allegations that government-designed Trojan Horse spyware is not only insecure, but packed with functions that exceed the limits of German law:

On Saturday, the CCC (the hacker group) announced that it had been given hard drives containing “state spying software,” which had allegedly been used by German investigators to carry out surveillance of Internet communication. The organization had analyzed the software and found it to be full of defects. They also found that it transmitted information via a server located in the United States. As well as its surveillance functions, it could be used to plant files on an individual’s computer. It was also not sufficiently protected, so that third parties with the necessary technical skills could hijack the Trojan horse’s functions for their own ends. The software possibly violated German law, the organization said.

Back in 2004–2005, software designed to facilitate police wiretaps was exploited by unknown parties to intercept the communications of dozens of top political officials in Greece. And just last year, we saw an attack on Google’s e-mail system targeting Chinese dissidents, which some sources have claimed was carried out by compromising a backend interface designed for law enforcement.

Any communications architecture that is designed to facilitate outsider access to communications—for all the most noble reasons—is necessarily more vulnerable to malicious interception as a result. That’s why technologists have looked with justified skepticism on periodic calls from intelligence agencies to redesign data networks for their convenience. At least in this case, the vulnerability is limited to specific target computers on which the malware has been installed. Increasingly, governments want their spyware installed at the switches—making for a more attractive target, and more catastrophic harm in the event of a successful attack.
