Yesterday’s bombshell announcement that Google is prepared to pull out of China rather than continue cooperating with government Web censorship was precipitated by a series of attacks on Google servers seeking information about the accounts of Chinese dissidents. One thing that leaped out at me from the announcement was the claim that the breach “was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” That piqued my interest because it’s precisely the kind of information law enforcement is able to obtain via court order, and I was hard-pressed to think of any other reason Google would have segregated access to user account and header information. And as Macworld reports, that’s precisely where the attackers got in:
That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.
This is hardly the first time telecom surveillance architecture designed for law enforcement use has been exploited by hackers. In 2005, it was discovered that Greece’s largest cellular network had been compromised by an outside adversary. Software intended to facilitate legal wiretaps had been switched on and hijacked by an unknown attacker, who used it to spy on the conversations of over 100 Greek VIPs, including the prime minister.
As an eminent group of security experts argued in 2008, the trend toward building surveillance capability into telecommunications architecture amounts to a breach by design, and a serious security risk. As the volume of requests from law enforcement at all levels grows, the compliance burden on telecoms grows as well, making it increasingly tempting to create automated portals that permit access to user information with minimal human intervention.
The problem of volume is front and center in a leaked recording released last month, in which Sprint’s head of legal compliance, Paul Taylor, revealed that the company’s automated system had processed 8 million requests for GPS location data in the span of a year, noting that it would have been impossible to serve that volume of law enforcement traffic manually. Less remarked on, though, was Taylor’s speculation that someone who downloaded a phony warrant form and submitted it to a random telecom would have a good chance of getting a response, and one assumes he’d know if anyone would.
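To make the worry concrete, here is a purely hypothetical sketch, in Python, of the kind of automated compliance portal being described. It is not modeled on Sprint’s or Google’s actual systems, and every name and data value in it is invented; it only illustrates the failure mode Taylor’s scenario implies: a portal that checks whether a submitted request looks like a valid legal demand, rather than verifying who sent it or whether a court actually issued it.

```python
# Hypothetical illustration only -- not any carrier's actual system.
# Shows why an automated legal-compliance portal that merely checks that a
# submitted "warrant" is well-formed (rather than verifying its provenance)
# concentrates risk: anyone who can produce a plausible form gets the data.

from dataclasses import dataclass


@dataclass
class WarrantForm:
    agency: str
    case_number: str
    target_account: str
    signed_by: str  # free-text field; nothing here is cryptographically verified


# Fake sample records standing in for subscriber location data.
SUBSCRIBER_LOCATIONS = {
    "555-0100": "40.7128N, 74.0060W",
}


def looks_like_a_warrant(form: WarrantForm) -> bool:
    """Superficial validation: are all the required fields filled in?"""
    return all([form.agency, form.case_number, form.target_account, form.signed_by])


def handle_request(form: WarrantForm) -> str:
    # The dangerous shortcut: "well-formed" is treated as "authorized".
    if not looks_like_a_warrant(form):
        return "REJECTED: incomplete form"
    return SUBSCRIBER_LOCATIONS.get(form.target_account, "no records")


if __name__ == "__main__":
    # An attacker who downloads a template and fills it in plausibly
    # clears every check the portal actually performs.
    phony = WarrantForm(
        agency="Anytown PD",
        case_number="2009-1234",
        target_account="555-0100",
        signed_by="J. Doe",
    )
    print(handle_request(phony))  # location data returned, no human review
```

The single point of failure is the looks_like_a_warrant check: it validates the form of the request, not its source, which is exactly the gap a phony warrant form would slip through.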
The irony here is that, while we’re accustomed to talking about the tension between privacy and security—to the point where it sometimes seems like people think greater invasion of privacy ipso facto yields greater security—one of the most serious and least discussed problems with built-in surveillance is the security risk it creates.