Topic: Telecom, Internet & Information Policy

Google (et al.) and Government Surveillance

Ars Technica reports here on the “provocative claim that Google is currently cooperating with secret elements in the US government, including the CIA.”  This is a possibility I blogged about here a couple of weeks ago.

It’s something people should be concerned about, and people’s concern is something Google should be concerned about.  

People averse to the risk of exposing their online activities to government surveillance should take Google’s studious silence as confirmation. 

Fake Boarding Pass Generator Underscores ID Woes

Yesterday, the blogosphere crackled with news that ‘net surfers could use a website to generate fake boarding passes that would enable them to slip past airport security and gain access to airport concourses. The news provides a good opportunity to illustrate how a credentialing (and identification) system works, and how it fails.

It’s very complicated, so I’m going to try to take it slowly and walk through every step.

The Computer Assisted Passenger Prescreening System (CAPPS) separates commercial air passengers into two categories: those deemed to require additional security scrutiny — termed “selectees” — and those who are not. When a passenger checks in at the airport, the air carrier’s reservation system uses certain information from the passenger’s itinerary for analysis in CAPPS. This analysis checks the passenger’s information against the CAPPS rules and also against a government-supplied “watch list” that contains the names of known or suspected terrorists.

Flaws in the design and theory of the CAPPS system make it relatively easy to defeat. A group with any sophistication and motivation can test the system to see which of its members are flagged, or what behaviors cause them to be flagged, then adjust their plans accordingly.

A variety of flaws and weaknesses afflict the practice of watch-listing. Simple name-matching causes many false positives, as so many Robert Johnsons will attest. But the foremost weakness is that a person who is not known to be a threat will not be listed. Watch-listing does nothing about people or groups acting for the first time.
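To make the name-matching problem concrete, here is a minimal sketch in Python. The list entry is hypothetical, and real screening systems use more elaborate matching, but the false-positive failure mode is the same.

```python
# Minimal, hypothetical sketch of watch-list screening by exact name match.
WATCH_LIST = {"ROBERT JOHNSON"}  # a hypothetical listed suspect

def is_flagged(passenger_name: str) -> bool:
    """Flag any passenger whose normalized name appears on the list."""
    return passenger_name.strip().upper() in WATCH_LIST

# Every innocent traveler who happens to share a listed name is flagged:
print(is_flagged("Robert Johnson"))  # True -- a false positive, repeated
                                     # for every law-abiding Robert Johnson
```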

In addition, a person who is known and listed can elude the system by using an alias. The use of a false or synthetic identity (and thus an inaccurate boarding card) could assist in this. But the simplest wrongful use of this fake boarding card generator would be to make a boarding card that allows a known bad person to receive no more security scrutiny than all the good people.

When CAPPS finds that a passenger should be given selectee status, this is transmitted to the check-in counter where a code is printed on the passenger’s boarding pass. At the checkpoint, the boarding pass serves as a credential indicating that the person is entitled to enter the concourse, and also indicating what kind of treatment the person should get — selectee or non-selectee. The credential is tied to the person bearing it by also checking a government-issued ID.

In a previous post, I included a schematic showing how identification cards work (from my book Identity Crisis). This might be helpful to review now because credentials like the boarding pass work according to the same three-step process: First, an issuer (the airline) collects information, including what status the traveler has. Next, the issuer puts it onto a credential (the boarding pass). Finally, the verifier or relying party (the checkpoint agent) checks the credential and accords the traveler the treatment that the credential indicates.

Checking the credential bearer’s identification (a repeat of this three-step process) and comparing the names on the two documents ties the boarding pass to the person (and in the process imports all the weaknesses of identification cards).
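As a rough illustration, here is a minimal sketch of the three-step process in Python. The names, fields, and functions are hypothetical, not a description of any real airline or checkpoint system.

```python
from dataclasses import dataclass

@dataclass
class BoardingPass:
    name: str
    flight: str
    selectee: bool  # extra-scrutiny status determined at check-in

def issue(name: str, flight: str, flagged: bool) -> BoardingPass:
    """Steps 1 and 2: the issuer (the airline) collects information and
    records it on the credential (the boarding pass)."""
    return BoardingPass(name, flight, flagged)

def verify(bp: BoardingPass, id_name: str) -> str:
    """Step 3: the verifier (the checkpoint agent) reads the credential,
    ties it to the bearer by comparing names with a government-issued ID,
    and accords the indicated treatment."""
    if bp.name.upper() != id_name.upper():
        return "names do not match; refer back to the airline"
    return "selectee screening" if bp.selectee else "ordinary screening"
```

Notice that the verifier simply trusts whatever the credential says; it has no independent way to re-derive the traveler’s status.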

Each of these steps is a point of weakness. If the information is bad, such as when a malefactor is not known, the first step fails and the system does not work. If the malefactor is using someone else’s ticket and successfully presents a fake ID, the third step has failed and the system does not work.

The simple example we’re using here breaks the second step. A person traveling under his own name may present a boarding pass for the flight for which he has bought a ticket — but the false boarding pass he presents does not indicate selectee status. He has eluded the CAPPS system and the watch list.
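In terms of the hypothetical sketch above, the forgery is trivial because nothing binds the printed data to the issuer (no signature, no lookup against airline records). A self-contained sketch:

```python
# Hypothetical, self-contained sketch of breaking the second step.
# The checkpoint trusts whatever the pass says; it cannot tell a
# forgery from a genuine credential.

def verify(bp: dict, id_name: str) -> str:
    if bp["name"].upper() != id_name.upper():
        return "names do not match; refer back to the airline"
    return "selectee screening" if bp["selectee"] else "ordinary screening"

genuine = {"name": "Robert Johnson", "flight": "AA100", "selectee": True}
forged  = {"name": "Robert Johnson", "flight": "AA100", "selectee": False}

print(verify(genuine, "Robert Johnson"))  # selectee screening
print(verify(forged,  "Robert Johnson"))  # ordinary screening: the forged
# pass looks just like a genuine one, so a traveler flying under his own
# name receives no more scrutiny than anyone else.
```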

The fake boarding pass generator does not create a new security weakness. It reveals an existing one. Though some people may want to, it’s important not to kill the messenger (who, in this case, is a Ph.D. student in security informatics at Indiana University who created the pass generator to call attention to the problem). As I’ve said before, identity-based security is terribly weak. Its costs — in dollars, inconvenience, economic loss, and lost privacy — are greater than its security benefit.

Hopefully, the revelation that people can use fake boarding passes to elude CAPPS and watch-lists is another step in the long, slow process of moving away from security systems that don’t work well, toward security systems that do. Good security systems address tools and methods of attack directly. They make sure all passengers on an airplane lack the capacity to do significant harm.

Does Online Vigilantism Make Sense?

Clyde Wayne Crews of the Competitive Enterprise Institute has a new piece out on cybersecurity, online vigilantism, and white hat hacking. It explores the many avenues for countering bad actors in the online environment, and draws a line between reaching out to aggress against them and using deception and guile to confound and frustrate them.

The piece is apparently motivated by the “Peer-to-Peer Piracy Prevention Act,” introduced a couple of years ago, which would have given the music industry immunity from liability for accessing peer-to-peer networks and attempting to prevent trade in its copyrighted material. Crews says “the industry is bound to try again.” His conclusion: “Explicit liability protection for particular classes of white hat hacking is ill advised…. A green light for hacking can work against broader cybersecurity and intellectual property goals, and there are alternatives.”

Should Government Identity Documents Use RFID?

Interesting question - and perhaps simpler than many people think. 

Back in June, the Department of Homeland Security’s Data Privacy and Integrity Advisory Committee (on which I serve) published a draft report on the use of RFID for human tracking.  (“RFID” stands for radio frequency identification, a suite of technologies that identify items - and, if put in cards, track people - by radio.)  The report poured cold water on using RFID in government-mandated identity cards and documents.  This met with some consternation among the DHS bureaus that plan to use RFID this way, and among the businesses eager to sell the technology to the government.

Despite diligent work to put the report in final form, the Committee took a pass on it at its most recent meeting in September - nominally because new members of the Committee had not had time to consider it.  The Committee is expected to finish this work and finalize the report in December.

But skeptics of the report continue to come out of the woodwork.  Most recently, the Center for Democracy and Technology wrote a letter to the Privacy Committee encouraging more study of the issue, implicitly discouraging the Committee from finding against RFID-embedded government documents.  CDT invited “a deeper factual inquiry and analysis [that] would foster more thoughtful and constructive public dialog.”

If the correct answer is “no,” do you have to say “yes” to be constructive? RFID offers no anti-forgery or anti-tampering benefit over other digital technologies that can be used in identification cards - indeed it has greater security weaknesses than alternatives.  And RFID has only negligible benefits in terms of speed and convenience because it does not assist with the comparison between the identifiers on a card and the bearer of the card.  This is what takes up all the time in the process of identifying someone.  (If that’s too much jargon, you need to read my book Identity Crisis: How Identification is Overused and Misunderstood.)

I shared my impression of CDT’s comments in an e-mail back to Jim Dempsey.  Jim and CDT do valuable work, but I think they are late to this discussion and are unwittingly undermining the Privacy Committee’s work to protect Americans’ privacy and civil liberties. My missive helps illustrate the thinking and the urgency of this problem, so after the jump, the contents of that e-mail:

Jim:

I’ve had time now to read your follow-up comments on the Department of Homeland Security Privacy Committee’s draft report on RFID and human tracking, and you and I have spoken about it briefly.  I wanted to offer a response in writing, and make my thinking available to others, because you and CDT are important figures in discussions like this.

First, I think it’s important to put the burden of proof in the right place.  When DHS proposes a change as significant as moving to radio-frequency-based (RF), digital human identification systems, the burden of proof is on the DHS to show why they should be adopted.  The burden is not on the Committee to show why they should not.

The use of digital methods to identify people is a sea change in the process of identification.  You know well, because you have written on these subjects extensively, that digital technologies make it very easy to collect, store, copy, transfer, and re-use personal information.  The leading identification systems being proposed and deployed for use on Americans are not just digital – they go a step further and use radio frequency technology of various stripes. 

Digital identification systems, such as the government-mandated RF systems we discuss generally in the report, have entirely different consequences for privacy from the analog and visual identification methods primarily used in government ID up to this point.  We begin to explore these consequences in the report. 

The report tries to confine itself to the concerns created by the addition of RF because trying to reach all the concerns with government-mandated digital ID systems is such a formidable task and because RF systems are the leading ones under consideration and development. 

Which brings me to a second important point: These systems are being designed, built, and implemented right now.

The DHS components that want to use RFID to track people are not awaiting the study or studies you propose.  The Privacy Committee’s role is to call out important privacy issues at relevant times, and the draft report on using RFID for human tracking does that.

If you wish to step back and ponder the issues, you are welcome to, but the inference I draw from your letter – that we should delay or suspend the Committee’s report on use of RFID for human tracking – would make the Committee a full participant in a program planning scenario we see too often in Washington, D.C.:  “Ready … fire … AIM!”

As you point out, the draft report does not reach every concern with every system, nor the detailed differences among them.  But it is not the job of the Committee to perform the in-depth study or studies you suggest. That is the job of the Department of Homeland Security components that seek to deploy these systems.

The members of the drafting subcommittee sought information about these systems and the privacy issues associated with them, and considered everything we were told and given by industry, privacy advocates, members of the public, and DHS components.  The information we have leads us fairly and accurately to conclude that the merits (and, through cost-benefit comparison, the net benefits) of these systems have not been shown.

I won’t belabor the specifics of all you invite the Committee to study in your comments, but I was particularly struck by your challenge to us to substantiate the following statement from the draft report:  “Without formidable safeguards, the use of RFID in identification cards and tokens will tend to enable the tracking of individuals’ movements, profiling of their activities and subsequent, non-security-related use of identification and derived information.”

Jim, we have yet to see an RF human identification system that does not collect and store information about every American subject to it for at least 75 years. You know that data collections this deep, held for periods of time this long, tend to find new, unanticipated, and often undesirable uses.  This is but one of the concerns with these systems.

Your letter is awfully sanguine for an organization that advocates for civil liberties and democratic values.  If CDT plans to do a “full and objective” assessment of RFID’s use in human tracking, I would be happy to help bring you up to speed.

 

Jim Harper
Director of Information Policy Studies
The Cato Institute

Google Office vs. Government “Request”

TechCrunch is a terrific blog covering new Internet products and companies.  Edited by Michael Arrington, it’s a clearinghouse of information on “Web 2.0” - the agglomeration of innovations that could take online life and business through their next leaps forward.

In this recent post, TechCrunch briefly assessed some concerns with Google’s office strategy.  Google has online offerings in the works that could substitute for the word processing and spreadsheet software on your computer - just like Gmail did with e-mail.

And just like Gmail, documents and information would remain on Google’s servers so they can be accessed anywhere.  This is a great convenience, but brings with it several problems, namely: 

The fact that unauthorized document access is a simple password guess or government “request” away already works against them. But the steady stream of minor security incidents we’ve seen (many very recently) can also hurt Google in the long run.

Arrington’s post goes on to highlight a series of small but significant security lapses at Google.  If Google wants companies and individuals to store sensitive data on their servers, they have to be pretty near perfect - or better than perfect.

Then there is government “request.” Arrington makes appropriate use of quotation marks to indicate irony.  Governments rarely “request” data in the true sense of that term.  Rather, they require its disclosure in various ways - by warrant or subpoena, for example, by issuing “national security letters,” or by making a technical “request” that is backed by the implicit threat of more direct action or regulatory sanctions.

On resisting government demands for data, Google has been better than most - an awfully low hurdle.  It opposed a subpoena for data about users’ searches earlier this year.  But Google has a long way to go if it wants people to believe that leaving data in their hands does not provide easy (and secret) access to the government.  Indeed, thanks to the recently passed cybercrime treaty, doing so may well provide access to foreign governments, opening the door to corporate espionage and any number of other threats.

At a meeting of the Department of Homeland Security’s Data Privacy and Integrity Advisory Committee in San Francisco last July, I asked Google Associate General Counsel Nicole Wong what the company is doing about its ability to protect information from government “request,” given the sorry state of Fourth Amendment law with respect to personal information held by third parties.  Her answer, which I must summarize because the transcript is not yet online, amounted to “not much.”  (Eventually, the transcript should be linked from here.)

Google has issued a “me too” about an effort to invite regulation of itself.  That project is going nowhere, but if it did get off the ground, it would do nothing about government access to the information that Google holds for its customers. 

Government access to data is a big flaw in Google’s nascent effort to move into online productivity services.

Net Regulation Proponent Concedes: Markets Work

Other than the religious devotees of regulation, most observers of the drive for “network neutrality” regulation have recognized that the essential question is whether there is sufficient competition among broadband providers.  If there is enough competition, broadband providers can’t use their market power to do bad things to consumers and public utility regulation of broadband is not needed.

Columbia law professor and champion of net neutrality regulation Tim Wu is quoted in the October 14 Economist admitting consumers’ power to influence broadband providers:

“The public reaction has already been as powerful and effective as any law,” says Timothy Wu, a professor at Columbia Law School who is credited with coining the term “net neutrality”. The debate has put the telecoms companies on notice that they are being watched closely, he says, and has forced them to make public pledges not to block or degrade access. “Shame can have more power than litigation,” says Mr Wu. “The market and consumers can control bad practices, but consumers actually have to be aware of what is going on for that to happen.”

It’s an interesting strategic and ethical question whether brandishing the regulation cudgel is appropriate, but as long as it’s agreed that consumers have influence in the broadband marketplace, that question can wait for another day.

Getting Data Breach in Perspective

Indiana University law professor and cybersecurity/informatics expert Fred Cate wrote sensibly in this weekend’s Washington Post about data security and identity fraud.  “The fact is that few if any [data] breaches lead to identity theft or other consumer injuries.”

When a Department of Veterans Affairs laptop with data on 26.5 million veterans was stolen earlier this year, VA notified all of them and asked Congress for $160.5 million to cover the cost of one year of credit monitoring.  Even if the laptop had not been returned (the data untouched), this reaction would have been overkill.
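For rough perspective, a back-of-the-envelope division of the figures above:

```python
# Back-of-the-envelope: requested budget per affected veteran.
print(160_500_000 / 26_500_000)  # about 6.06 -- roughly $6 per veteran
                                 # for a year of credit monitoring
```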

Washington has a hard time responding to problems dispassionately and proportionately.  If only this failing could be the crisis du jour - even just for a day.