Topic: Telecom, Internet & Information Policy

Still Contemptuous of the Court, TSA Doesn’t Even Try to Justify its Strip-Search Machine Policy

It took the Transportation Security Administration 20 months to comply with a D.C. Circuit Court of Appeals order requiring it to issue a justification for its policy of using strip-search machines for primary screening at airports and to begin taking comments from the public.

In that time, it came up with a 53-page (double-spaced) notice of proposed rulemaking. That’s 2.65 double-spaced pages per month.

This may be the most carefully written rulemaking document in history. We’ll be discussing it next week at an event entitled: “Travel Surveillance, Traveler Intrusion.” Register now!

The TSA’s strip-search machine notice will be published in the Federal Register tomorrow, and the public will have 90 days to comment. The law requires the agency to consider those public comments before it finalizes its policies. If the comments reveal the TSA’s policies to be arbitrary or capricious, the policies can be struck down.

But what is there to comment on? The TSA’s brief document defends a hopelessly vague policy statement instead of the articulation that the court asked for. And as to the policy we all know it’s implementing, TSA hides behind the skirts of government secrecy.

When the court found that the TSA was supposed to take comment from the public, it wanted a clearer articulation of what rules apply at the airport. The court’s ruling itself devoted several paragraphs to the policy and how it affects American travelers.

[T]he TSA decided early in 2010 to use the scanners everywhere for primary screening. By the end of that year the TSA was operating 486 scanners at 78 airports; it plans to add 500 more scanners before the end of this year.

No passenger is ever required to submit to an AIT scan. Signs at the security checkpoint notify passengers they may opt instead for a patdown, which the TSA claims is the only effective alternative method of screening passengers. A passenger who does not want to pass through an AIT scanner may ask that the patdown be performed by an officer of the same sex and in private. Many passengers nonetheless remain unaware of this right, and some who have exercised the right have complained that the resulting patdown was unnecessarily aggressive.

The court wanted a rulemaking on this policy. In the jargon of administrative procedure, the court demanded a “legislative rule,” something that reasonably details the rights of the public and what travelers can expect when they go to the airport.

Instead, the TSA has produced a perfectly vague policy statement that conveys nothing about what law applies at the airport. In the regulations that cover screening and inspection, the TSA simply wants to add:

(d) The screening and inspection described in (a) may include the use of advanced imaging technology. For purposes of this section, advanced imaging technology is defined as screening technology used to detect concealed anomalies without requiring physical contact with the individual being screened.

Not a word about the use of strip-search machines as primary screening. Nothing about travelers’ options. Nothing about signage. Nothing about the procedures for opt-outs. Nothing about what a person can do if they have a complaint. It’s not a regulation. It’s a restatement of “we do what we want.”

That’s contemptuous of the court’s order requiring TSA to inform the public, take comments, and consider those comments in formulating a final rule. TSA is doing everything it can to make sure that the airport is a constitution-free zone, and this time it’s lifting a middle finger to the D.C. Circuit Court of Appeals.

It is possible, even in a relatively short document, to articulate how billions of dollars spent on exposing the bodies of millions of law-abiding Americans makes the country better off. What’s amazing about the document is how little it says. TSA doesn’t even try to justify its strip-search machine policy. Instead, it plays the government secrecy trump card.

Here is everything TSA says about how strip-search machines (or “AIT” for “advanced imaging technology”) make air travel safer:

[R]isk reduction analysis shows that the chance of a successful terrorist attack on aviation targets generally decreases as TSA deploys AIT. However, the results of TSA’s risk-reduction analysis are classified.

Balderdash.

Under Executive Order 13526, classification is permitted if “disclosure of the information reasonably could be expected to result in damage to the national security, which includes defense against transnational terrorism.”

“If there is significant doubt about the need to classify information,” the order continues, “it shall not be classified.”

Assessing the costs and benefits of TSA’s policies cannot possibly result in damage to national security. The reason I know this? It’s already been done, publicly, by Mark G. Stewart of the University of Newcastle, Australia, and John Mueller of the Ohio State University. They published their findings in the Journal of Homeland Security and Emergency Management in 2011, and national security is none the worse.

Walking through how well policies and technologies produce security can be done without revealing any intelligence about threats, and it can be done without revealing vulnerabilities in the policy and technology. But the TSA is playing the secrecy trump card, hoping that a gullible and fearful public will simply accept its authority.

I anticipated that the agency might try this tactic when the original order to engage in a public rulemaking came down in mid-2011. In a Cato blog post, I wrote:

Watch in the rulemaking for the TSA to obfuscate, particularly in the area of threat, using claims to secrecy. “We can’t reveal what we know,” goes the argument. “You’ll have to accept our generalizations about the threat being ‘substantial,’ ‘ever-changing,’ and ‘growing.’” It’s an appeal to authority that works with much of the American public, but it is not one to which courts—a co-equal branch of the government—should so easily succumb.

If it sees it as necessary, the TSA should publish its methodology for assessing threats, then create a secret annex to the rulemaking record for court review containing the current state of threat under that methodology, and how the threat environment at the present time compares to threat over a relevant part of the recent past. A document that contains anecdotal evidence of threat is not a threat methodology. Only a way of thinking about threat that can be (and is) methodically applied over time is a methodology.

The TSA published nothing, and it hopes to get past the public and the courts with that.

Its inappropriate and undeniably overbroad use of secrecy will feature in our comments to the agency and in the legal appeal that will almost certainly follow.

Crucially, agency actions like this are subject to court review. When the TSA finalizes its rules, a court will “decide all relevant questions of law, interpret constitutional and statutory provisions, and determine the meaning or applicability of the terms of an agency action.” Sooner or later, we’ll talk about whether TSA followed the court’s order, the lawfulness of wrapping its decision-making in secrecy, and the arbitrary nature of a policy that has no public justification.

It’s All About the Authors

In anticipation of a hearing in the House Judiciary Committee Wednesday afternoon, Sandra Aistars, executive director of the Copyright Alliance, writes in the Hill about the principles that should guide copyright reform, calling for debate “based in reality rather than rhetoric.”

Chief among these principles is that protecting authors is in the public interest. Ensuring that all creators retain the freedom of choice in determining how their creative work is used, disseminated and monetized is vital to protecting freedom of expression.

Arguing for authors in terms of freedom of choice and expression is good rhetoric, but it’s quite unlike what I expect you’ll hear during our noon Wednesday forum on copyright and the book Laws of Creation: Property Rights in the World of Ideas.

Authors Ron Cass and Keith Hylton methodically go through each intellectual property doctrine and explore its economic function, giving few words to authors’ “choice” or their “freedom of expression.” They certainly don’t denigrate authors or their role, but Cass and Hylton don’t vaunt them the way Aistars does either.

Recent events in the copyright area are providing much grist for the discussion. You can still register for the book forum, treating it as a warm-up for Wednesday afternoon’s hearing, if your freedom of choice and expression so dictate.

Beware the Data, II

A couple of months ago I warned about the dangers of having government gather and publish growing reams of information in the name of making education better. Sure, it sounds great – help people get as informed as possible! – but the dangers are legion. You can read about several such pitfalls in that old post. You can also get a sense of the great wealth of data already out there in this op-ed. What I haven’t discussed – and what might concern many Americans more than anything else – is the threat that massive data collection poses to our privacy.

Articles over the last week or so have started to draw significant attention to the growing education-information complex and its connection to long-standing efforts – especially federal – to accumulate information on Americans from birth to boardroom. Gaining particular traction has been a story about how student data collected in New York could be sold to companies or other entities outside of school districts. Even more concerning is a story by Joy Pullmann in the Orange County Register about lots of data collection and mining that is either already happening or under consideration nationwide.

What’s especially troubling to some people, including Pullmann, is that not only is there ever-growing centralization of curricula such as the federally backed Common Core, as well as centralized testing of knowledge, but there are also moves to assess students’ “affect” that could include wiring them to “facial expression” cameras and “skin conductance sensors.” Contemplating such things, it’s hard not to conjure up images of A Clockwork Orange.

When you read the federal report that proposes using “affective computing methods” such as skin sensors, it doesn’t appear that the authors have nefarious, big-brother intentions. The object of the report is to examine how students’ “grit” and perseverance can be improved, and that is a reasonable goal. Similarly, furnishing information about the academic status of incoming freshmen at a college, the amount they learn while in school, and how well they fare after graduation, is driven by good intentions.

But we must never feel content with good intentions. We must care primarily about the effects of the policies stemming from our golden goals, and as I’ve written previously, there are likely big, negative, immediate effects that would go with empowering more government data collection. There are also potentially even worse long-term consequences, including that government would begin to try to adjust students’ feelings and attitudes if doing so might produce better test scores or some other, politically determined, outcome. Indeed, such affect-engineering arguably already takes place with huge increases in ADD and ADHD diagnoses that lead to personality-altering drug-taking.

It’s easy – and almost always innocent – to say that we need more information so that we can make things work better. But with that comes very big potential dangers we must never ignore.

Cross-posted at seethruedu.com

Debating Intellectual Property

In remarks at the most recent meeting of the President’s Export Council, now-former U.S. trade representative Ron Kirk (he just stepped down yesterday) stated (see 25:10 of the video):

We have a knowledge-based economy, and we have to protect that. … We have to have the strongest intellectual property protection that we can possibly seek in these [trade] agreements. … We are failing miserably in the public debate about the importance of protecting our intellectual property rights. … Somehow we [need] to fashion an argument for the American public that helps them to understand that if we give away our work product, we just don’t have a future.

I agree that there should be some protection of intellectual property. But how much? I think a public debate about these issues would be great. From what I can see from my perspective in the trade world, there’s almost never a real discussion of this issue. Instead, there is just constant pushing from the U.S. government for stronger intellectual property protection.

If there is going to be a debate, here are some questions I have:

  • Why should patent terms be 20 years rather than, say, 10 or 30 years?  The 20-year term seems arbitrary, and I’d like to see some evidence that this is the right one. And are there some products that should not be eligible for patents?
  • Why should copyright terms be the life of the author plus 70 years? They used to be much shorter. Has copyright become unbalanced?
  • Is there room for different views among different countries? Should the U.S. government be pushing other countries to adopt our model?

I don’t think that whether we should “give our work product away” is the right way to frame the issue. Rather, the question is, how much protection should intellectual property be given? By all means, let’s have a public debate about that.

A Copyright Comeback?

Register here now for next Wednesday’s book forum.

There is certainly excellent Cato work on copyright and intellectual property that predates mine, but the starting point for my work in the area was the 2006 “Copyright Controversies” conference. Along with considering whether copyright is founded in natural law or utilitarian considerations, we examined the challenges to copyright posed by emerging modes of creation and by enforcement issues.

Since then, I’ve made it my practice to periodically return to copyright, intellectual property law, and other information regulations when I’ve come across a new book that brings new ideas to the table.

At our most recent book event, on the Mercatus book Copyright Unbalanced: From Incentive to Excess, the case for copyright reform made by Cato alumni Jerry Brito and Tom W. Bell was met with a strong, first-principles defense of copyright by Mitch Glazier.

Now comes Laws of Creation: Property Rights in the World of Ideas, in which Ronald A. Cass and Keith Hylton reject the idea that changing technology undermines the case for intellectual property rights. They argue that making the work of inventors and creators free would be a costly mistake.

Between Glazier’s performance and this new book, perhaps the intellectual tide is turning back to support for copyright and intellectual property law. But two data points are probably not enough to identify a trend.

On March 20th, we’ll have Cass and Hylton at Cato to present their work, with Jerry Brito providing commentary. It’s up to you to decide for yourself whether copyright is making a comeback. The question is especially acute with the recent ruling that unlocking one’s cell phone in order to use it on another network is illegal.

Register now!

Making Sense of Drug Violence in Mexico with Big Data, New Media, and Technology

Yesterday we hosted a very interesting event with Google Ideas about the use of new media and technology information in Mexico’s war on drugs. You can watch the whole thing in the video below.

Unfortunately, one of the biggest casualties from the bloodshed that besets Mexico is freedom of the press. Drug cartels have targeted traditional media outlets such as TV stations and newspapers for their coverage of the violence. Mexico is now the most dangerous country to be a journalist. However, a blackout of information about the extent of violence has been avoided because of activity on Facebook pages, blogs, Twitter accounts, and YouTube channels.

Our event highlighted the work of two Mexican researchers on this topic. Andrés Monroy-Hernández from Microsoft Research presented the findings of his paper “The New War Correspondents: The Rise of Civic Media Curation in Urban Warfare,” which shows how Twitter has replaced traditional media in several Mexican cities as the primary source of information about drug violence. Also, we had Javier Osorio, a Ph.D. candidate at the University of Notre Dame, who has built original software that tracks the patterns of drug violence in Mexico using computerized textual annotation and geospatial analysis.

Our third panelist was Karla Zabludovsky, a reporter from the New York Times’ Mexico City Bureau, who talked about the increasing dangers faced by journalists in Mexico and the challenges that new media represent in covering the war on drugs in that country.

Even though Enrique Peña Nieto, Mexico’s new president, has focused the narrative of his presidency on economic reform, the war on drugs continues to wreak havoc in Mexico. In just the first two months of the year, over 2,000 people have been killed by organized crime.

At the Cato Institute we keep close track of developments in Mexico, and we have published plenty of material on the issue, including:

Watch the full event:

And for those who speak the language of Cervantes, here’s a ten-minute interview that Karla Zabludovsky and I did on CNN en Español about the Cato event.

Google Illuminates the Shadowy World of National Security Letters

In a pretty much unprecedented move, Google today announced that it was expanding its regular “Transparency Report” to include some very general information about government demands for user information made using National Security Letters, which can be issued by the head of any of 56 FBI field offices without judicial approval or supervision. Recipients of NSLs are typically forbidden from ever revealing even the existence of the request, so these demands are not included in the company’s general tally of government surveillance requests. Instead of disclosing specific numbers of NSL requests, then, Google is publishing a wide range indicating the rough volume of requests it gets each year, and how many users are affected. Broad as these ranges are, there are some interesting points to be gleaned here:

NSLs Google has received since 2009

It’s illuminating to compare the minimum number of users affected by NSLs each year to the numbers we find in the government’s official annual reports. In 2011—the last year for which we have a tally—the Justice Department acknowledged issuing 16,511 NSLs seeking information about U.S. persons, with a total of 7,201 Americans’ information thus obtained. That’s actually down from a staggering 14,212 Americans whose information DOJ reported obtaining via NSL the previous year. Remember, this total includes National Security Letters issued not just to all telecommunications providers—including online services like Google, broadband Internet companies, and cell phone carriers—but also “financial institutions,” which are defined broadly to include a vast array of businesses beyond such obvious candidates as banks and credit card companies.

What ought to leap out at you here is the magnitude of Google’s tally relative to that total: Google got requests affecting at least 1,000 users in a year when DOJ reports just over 7,000 Americans affected by all NSLs—and it seems impossible that Google could account for anywhere remotely near a seventh of all NSL requests. Google, of course, is not limiting its tally to requests for information about Americans, which may explain part of the gap—but we know that, at least as of a few years ago, the substantial majority of NSLs targeted Americans, and the proportion of the total targeting Americans was increasing year after year. As of 2006, for instance, 57 percent of NSL requests were for information about U.S. persons. So even if we reduce Google’s minimum proportionately, that seems awfully high.
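The back-of-envelope comparison above can be made concrete. This is a minimal sketch using only the figures quoted in this post; applying the 2006 U.S.-person share to Google’s 2011 minimum is purely an illustrative assumption, since the true share for Google’s requests is unknown:

```python
# Figures quoted in the post above.
google_min_users = 1000      # lower bound of Google's reported range of users affected per year
doj_us_persons_2011 = 7201   # Americans whose info DOJ reported obtaining via NSL in 2011
us_person_share_2006 = 0.57  # share of NSL requests targeting U.S. persons, as of 2006

# Illustrative discount: assume only 57 percent of the users in Google's
# minimum were U.S. persons, then compare against DOJ's reported total.
adjusted_min = google_min_users * us_person_share_2006
fraction_of_doj_total = adjusted_min / doj_us_persons_2011

print(f"Discounted Google minimum: {adjusted_min:.0f} users")
print(f"Implied share of DOJ's 2011 total: {fraction_of_doj_total:.1%}")
```

Even after the discount, one company alone would account for roughly 8 percent of all reported NSL-affected Americans, which is the implausibility the argument turns on.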

There’s a simple enough explanation for this apparent discrepancy: The numbers DOJ reports each year explicitly exclude NSL requests for “basic subscriber information,” meaning the “name, address, and length of service” associated with an account, and only count more expansive requests that also demand more detailed “electronic communications transactional records” that are “parallel to” the “toll billing records” maintained by traditional phone companies. I’ll get back to what that means in a second. But the obvious inference from comparing these numbers, unless Google gets a completely implausibly disproportionate percentage of total NSLs, is that the overwhelming majority of NSLs are just such “basic subscriber information” requests, and that the total number of Americans affected by all NSLs is thus vastly, vastly larger than the official numbers would suggest.

The rationale for not counting such “basic subscriber information” requests—beyond a desire not to terrify Americans by exposing the true magnitude of government surveillance—is presumably that these are so limited in scope that they don’t pose the same kind of civil liberties concerns as more extensive data requests. But this may not really be the case when you think about how we use the Internet in practice: Many people, after all, go online to engage in anonymous speech. In those cases, the contents of a person’s communications may be public (or at least widely shared), and what’s sensitive and private is the identity of the person tied to a particular account. (The first step in the FBI investigation that ultimately brought down CIA chief David Petraeus, recall, was stripping away the digital anonymity of his biographer and lover, Paula Broadwell, by linking a pseudonymous e-mail address to her primary Google account.) Indeed, that seems to be the primary reason one would issue such a “basic subscriber information” request to an entity like Google: To effectively de-anonymize the otherwise unknown user of a particular account. Insofar as the right to both speak and read or receive information anonymously has long been recognized by the Supreme Court as a component of our basic First Amendment freedoms, even these relatively limited requests may indeed have important implications for our civil liberties. And Google’s numbers, imprecise as they are, very strongly suggest that such requests are issued in far higher numbers than had previously been recognized.

The other interesting tidbit to come from Google today is their expanded FAQ detailing what kinds of information can be obtained under NSLs:

Under the Electronic Communications Privacy Act (ECPA) 18 U.S.C. section 2709, the FBI can seek “the name, address, length of service, and local and long distance toll billing records” of a subscriber to a wire or electronic communications service. The FBI can’t use NSLs to obtain anything else from Google, such as Gmail content, search queries, YouTube videos or user IP addresses.

For a long time, the FBI operated on the assumption that NSLs could be used broadly to obtain any “electronic communications transactional records.” But in a 2008 memorandum, the Office of Legal Counsel rejected that interpretation, holding that NSL authority “reaches only those categories of information parallel to subscriber information and toll billing records for ordinary telephone service.” Just what that means, of course, is fairly opaque—but I think most observers had supposed, as I had, that it encompassed user IP addresses. Since these can be crucial to linking a wide array of online activity to a particular user, their exclusion would somewhat limit the potential of NSLs to undermine Internet anonymity. Whether IPs are covered, however, may well depend on the specific service in question—and it is not at all clear whether other providers will disclose IP addresses in response to NSLs.

Of course, what Google does not specify clearly is just what information does fall into the category of “toll billing records.” In all likelihood, however, it covers the equivalent of the kind of information about who is communicating with whom that might be found on a phone bill—such as a list of all the people with whom you exchange e-mails or Gchat instant messages, though again, given differences in how people use the Internet versus traditional phone service, such lists are likely to be substantially more revealing than any phone bill.