Tag: bruce schneier

It’s Time to Break up the NSA

says security guru Bruce Schneier on CNN.com.

His brief, readable piece articulates the three distinct – and conflicting – missions the NSA now has, and how they should be handled. It’s no hit piece: Schneier calls NSA’s Tailored Access Operations group “the best of the NSA and … exactly what we want it to do.”

The generals who have built NSA into a fiefdom will fight tooth and nail against true reforms like these, of course, but they’re the kind of reforms we need. The most prominent measures under discussion are mere nibbles around the edges of the problem, or worse.

How’s That Oversight Coming Along?

One of the claims made by defenders of NSA spying is that it’s overseen and approved by all three branches of the federal government.

Computer security expert Bruce Schneier provides some insight into how well congressional oversight is working in a short blog post entitled: “Today I Briefed Congress on the NSA.”

This morning I spent an hour in a closed room with six Members of Congress: Rep. Lofgren, Rep. Sensenbrenner, Rep. Scott, Rep. Goodlatte, Rep. Thompson, and Rep. Amash. No staffers, no public: just them. Lofgren asked me to brief her and a few Representatives on the NSA. She said that the NSA wasn’t forthcoming about their activities, and they wanted me – as someone with access to the Snowden documents – to explain to them what the NSA was doing.

Many members of Congress have been derelict for years in not overseeing the National Security Agency. Now some members of Congress are asking questions, and they’re being stonewalled.

It’s the government so…

I suggested that we hold this meeting in a SCIF, because they wanted me to talk about top secret documents that had not been made public. The problem is that I, as someone without a clearance, would not be allowed into the SCIF.

Randy Barnett and I made the case last fall that the panels of judges who approve domestic spying under the Foreign Intelligence Surveillance Act should not be regarded as legitimate courts. Their use to dispose of Americans’ rights violates due process.

And the executive branch? Here’s President Obama: “I mean, part of the problem here is we get these through the press and then I’ve got to go back and find out what’s going on…”

How’s that tripartite oversight coming along?

Libertarians Shouldn’t Want Perfect Security—Reply to Professor Epstein

I was pleased to see last week that Professor Epstein had penned a response to my criticism of his recent piece on Hoover’s Defining Ideas in which he argued against treating protection of civil liberties and privacy as “nonnegotiable” in the context of counterterrorism. It is not the disagreement that is pleasing, of course, but the opportunity to air it, which can foster discussion of these issues among libertarians while illustrating to the broader world how seriously libertarians take both security and liberty.

What’s most important in Professor Epstein’s rejoinder is what comes at the end. He says that I should “comment constructively on serious proposals” rather than take an a priori position that civil liberties and privacy will often impede expansions of government power proposed in the name of counterterrorism.

I believe that Professor Epstein and I share the same prior commitments–to limited government, free markets, and peace. Having left it implicit before, I’ll state that I, too, believe that protection of life and property is the primary function of the state. But I also believe that excesses in pursuit of security can cost society and our liberties more than they produce in benefits.

Some years of work on counterterrorism, civil liberties, and privacy bring me to my conclusions. I had put in a half-decade of work on privacy before my six years of service on the Department of Homeland Security’s privacy advisory committee began in 2005. While interacting with numerous DHS components and their programs, I helped produce the DHS Privacy Committee’s risk-management-oriented “Framework for Privacy Analysis of Programs, Technologies, and Applications.” From time to time, I’ve also examined programs in the Science and Technology Directorate at DHS through the Homeland Security Institute. My direct knowledge of the issues in counterterrorism pales in comparison to the 30+ experts my Cato colleagues and I convened in private and public conferences in 2009 and 2010, of course, but my analysis benefitted from that experience and from co-editing the Cato book: Terrorizing Ourselves: Why U.S. Counterterrorism Policy is Failing and How to Fix It.

Whether I’m operating from an inappropriate a priori position or not, I don’t accept Professor Epstein’s shift of the burden. I will certainly comment constructively when the opportunity arises, but it is up to the government, its defenders, and here Professor Epstein to show that security programs are within the government’s constitutional powers, that such programs are not otherwise proscribed by the constitution, and that they cost-effectively make our society more secure.

The latter two questions are collapsed somewhat by the Fourth Amendment’s requirement of reasonableness, or “fit” between means and ends when a search or seizure occurs. And to the extent I can discern the program that Professor Epstein prefers, I have commented on it as constructively as I can.

TSA Behavioral Screening

Behavioral screening is a useful tool in deterring and preventing terrorist attacks. As I noted in this piece at Politico, a border patrol agent successfully used behavioral screening to stop the would-be Millennium Bomber. She noticed something “hinky” about a man driving south across the Canadian border. That “hinky” – fidgety and nervous behavior when asked routine customs questions – exposed a car full of explosives intended for the passenger terminal of Los Angeles International Airport.

Two items from the USA Today travel section highlight some mixed results with TSA behavioral screening. Today’s edition reports that behavioral screening, applied by Behavioral Detection Officers (BDOs), missed at least 16 people later linked to terror plots. On the other side of the equation, false positives can impose burdens on those who are nervous or upset for reasons other than terrorism aspirations.

The TSA Blog defended the program: “If you’re one of those travelers that gets frazzled easily (not hard to do at airports), you have no reason to worry. BDOs set a baseline based on the normal airport behavior and look for behaviors that go above that baseline. So if you’re stressing about missing a flight, that’s not a guaranteed visit from the BDOs.”

That would be reassuring if yesterday’s travel section hadn’t revealed that TSA screeners are keeping a list of those who get upset at intrusive screening procedures. “Airline passengers who get frustrated and kick a wall, throw a suitcase or make a pithy comment to a screener could find themselves in a little-known Homeland Security database.”

Of course, we can take comfort from the words of a TSA screener to security expert Bruce Schneier. “This isn’t the sort of job that rewards competence, you know.”

Are You Substituting Worst-Case Thinking for Reason?

Bruce Schneier has a typically good essay on the use of “worst-cases” as a substitute for real analysis. I noticed conspicuous use of “worst-case” in early reporting on the oil spill in the Gulf. It conveniently gins up attention for media outlets keen on attracting an audience.

There’s a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination for thinking, speculation for risk analysis and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism.

Worst-case thinking—the failure to manage risk through analysis of costs and benefits—is what makes airline security such an expensive nightmare, for example. Schneier concludes:

When someone is proposing a change, the onus should be on them to justify it over the status quo. But worst case thinking is a way of looking at the world that exaggerates the rare and unusual and gives the rare much more credence than it deserves. It isn’t really a principle; it’s a cheap trick to justify what you already believe. It lets lazy or biased people make what seem to be cogent arguments without understanding the whole issue.

It’s not long; read the whole thing.

‘The Dumbest Terrorist In the World’?

Businessweek has a story quoting a former federal prosecutor in Brooklyn, Michael Wildes, speculating that Faisal Shahzad, the would-be Times Square bomber, made so many mistakes (leaving his house keys in the car, not knowing about the vehicle identification number, making calls from his cellphone, getting filmed, buying the car himself) that he may be the “dumbest terrorist in the world.” But Wildes can’t accept the idea that an al Qaeda type terrorist would be so incompetent and suggests that Shahzad was “purposefully hapless” to generate intelligence about the police reaction for the edification of his buddies back in Pakistan.

Give me a break. This incompetence is hardly unprecedented. Three years ago Bruce Schneier wrote an article titled “Portrait of the Modern Terrorist as an Idiot,” describing the incompetence of several would-be al Qaeda plots in the United States and castigating commentators for clinging to the image of these guys as Bond-style villains who rarely err. It’s been six or seven years since people, including me, started pointing out that al Qaeda was wildly overrated. Back then, most people used to say that the reason al Qaeda hadn’t managed a major attack here since September 11 was because they were biding their time and wouldn’t settle for conventional bombings after that success. We are always explaining away our enemies’ failures.

The point here is not that all terrorists are incompetent – no one would call Mohammed Atta that – or that we have nothing to worry about. Even if all terrorists were amateurs like Shahzad, vulnerability to terrorism is inescapable. There are too many propane tanks, cars, and would-be terrorists to be perfectly safe from this sort of attack. The same goes for Fort Hood.

The point is that we are fortunate to have such weak enemies. We are told to expect nuclear weapons attacks, but we get faulty car bombs. We should acknowledge that our enemies, while vicious, are scattered and weak. If we paint them as the globe-trotting super-villains that they dream of being, we give them power to terrorize us that they otherwise lack. As I must have said a thousand times now, they are called terrorists for a reason.  They kill as a means to frighten us into giving them something.

The guys in Waziristan who trained Shahzad are probably embarrassed to have failed in the eyes of the world and would be relieved if we concluded that they did so intentionally. Likewise, it must have heartened the al Qaeda group in Yemen when the failed underwear bomber that they sent west set off the frenzied reaction that he did.  Remember that in March, al Qaeda’s American-born spokesperson/groupie Adam Gadahn said this:

Even apparently unsuccessful attacks on Western mass transportation systems can bring major cities to a halt, cost the enemy billions and send his corporations into bankruptcy.

As our enemies realize, the bulk of harm from terrorism comes from our reaction to it.  Whatever role its remnants or fellow-travelers had in this attempt, al Qaeda (or whatever we want to call the loosely affiliated movement of internationally-oriented jihadists) is failing. They have a shrinking foothold in western Pakistan, maybe one in Yemen, and little more. Elsewhere they are hidden and hunted. Their popularity is waning worldwide. Their capability is limited. The predictions made after September 11 of waves of similar or worse attacks were wrong. This threat is persistent but not existential.

This attempt should also remind us of another old point: our best counterterrorism tools are not air strikes or army brigades but intelligence agents, FBI agents, and big city police.  It’s true that because nothing but bomber error stopped this attack, we cannot draw strong conclusions from it about what preventive measures work best. But the aftermath suggests that what is most likely to prevent the next attack is a criminal investigation conducted under normal laws and the intelligence leads it generates. Domestic counterterrorism is largely coincident with ordinary policing. The most important step in catching the would-be bomber here appears to have been getting the vehicle identification number off the engine and rapidly interviewing the person who sold it. Now we are seemingly gathering significant intelligence about bad actors in Pakistan under standard interrogation practices.

These are among the points explored in the volume Chris Preble, Jim Harper and I edited: Terrorizing Ourselves: Why U.S. Counterterrorism Policy is Failing and How to Fix It – now hot off the presses. Contributors include Audrey Kurth Cronin, Paul Pillar, John Mueller, Mia Bloom, and a bunch of other smart people.

We’re discussing the book and counterterrorism policy at Cato on May 24th at 4 PM. Register to attend or watch online here.

Making Sense of New TSA Procedures

Since they were announced recently, I’ve been working to make sense of new security procedures that TSA is applying to flights coming into the U.S.

“These new measures utilize real-time, threat-based intelligence along with multiple, random layers of security, both seen and unseen, to more effectively mitigate evolving terrorist threats,” says Secretary Napolitano.

That reveals essentially nothing of what they are, of course. Indeed, “For security reasons, the specific details of the directives are not public.”

But we in the public aren’t so many potted plants. We need to know what they are, both because our freedoms are at stake and because our tax money will be spent on these measures.

Let’s start at the beginning, with identity-based screening and watch-listing in general. A recent report in the New York Times sums it up nicely:

The watch list is actually a succession of lists, beginning with the Terrorist Identities Datamart Environment, or TIDE, a centralized database of potential suspects.  … [A]bout 10,000 names come in daily through intelligence reports, but … a large percentage are dismissed because they are based on “some combination of circular reporting, poison pens, mistaken identities, lies and so forth.”

Analysts at the counterterrorism center then work with the Terrorist Screening Center of the F.B.I. to add names to what is called the consolidated watch list, which may have any number of consequences for those on it, like questioning by the police during a traffic stop or additional screening crossing the border. That list, in turn, has various subsets, including the no-fly list and the selectee list, which requires passengers to undergo extra screening.

The consolidated list has the names of more than 400,000 people, about 97 percent of them foreigners, while the no-fly and selectee lists have about 6,000 and 20,000, respectively.

After the December 25, 2009 attempted bombing of a Northwest Airlines flight from Amsterdam into Detroit, TSA quickly established, then quickly lifted, an oppressive set of rules for travelers, including bans on blankets and on moving about the cabin during the latter stages of flights. In the day or two after a new attempt, security excesses of this kind are forgivable.

But TSA also established identity-based security rules of similar provenance and greater persistence, subjecting people from fourteen countries, mostly Muslim-dominated, to special security screening. This was a ham-handed reaction, increasing security against the unlikely follow-on attacker by a tiny margin while driving wedges between the U.S. and people well positioned to help our security efforts.

Former DHS official Stewart Baker recently discussed the change to this policy on the Volokh Conspiracy blog:

The 14-country approach wasn’t a long-term solution.  So some time in January or February, with little fanfare, TSA seems to have begun doing something much more significant.  It borrowed a page from the Customs and Border Protection playbook, looking at all passengers on a flight, running intelligence checks on all of them, and then telling the airlines to give extra screening to the ones that looked risky.

Mark Ambinder lauded the new policy on the Atlantic blog, describing it thusly:

The new policy, for the first time, makes use of actual, vetted intelligence. In addition to the existing names on the “No Fly” and “Selectee” lists, the government will now provide unclassified descriptive information to domestic and international airlines and to foreign governments on a near-real time basis.

The change is likely the application, to air travel screening, of a Customs and Border Protection program called ATS-P (Automated Targeting System - Passenger), or something very much like it.

“[ATS-P] compares Passenger Name Record (PNR) and information in [various] databases against lookouts and patterns of suspicious activity identified by analysts based upon past investigations and intelligence,” says this Congressional Research Service report.

“It was through application of the ATS-P that CBP officers at the National Targeting Center selected Umar Farouk Abdulmutallab, who attempted to detonate an explosive device on board Northwest Flight 253 on December 25, 2009, for further questioning upon his arrival at the Detroit Metropolitan Wayne County Airport.”

Is using ATS-P or something like it an improvement in the way airline security is being done? It probably is.

A watch-list works by comparing the names of travelers to the names of people whom intelligence has deemed concerning. To simplify, the logic looks something like this:

If first name=”Cat” (or variants) and last name=”Stevens”, then *flag!*

Using intelligence directly just broadens the identifiers you use, so the comparison (again simplified) might look something like this:

If biography contains “traveled in Yemen” or “Nigerian student” or “consorted with extremists”, then *flag!*

The ability to flag a potential terrorist with identifiers beyond name is a potential improvement. Such a screening system would be more flexible than one that used purely name-based matching. But using more identifiers isn’t automatically better.
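A minimal sketch of the two matching styles may make the contrast concrete. Every function name, data field, and rule here is invented for illustration; the actual ATS-P logic is, as noted, not public.

```python
# Hypothetical sketch of the two matching styles described above.
# All names, fields, and rules are invented for illustration.

def name_match(passenger, watch_list):
    """Pure name-based matching: flag only if the exact name is listed."""
    return (passenger["first"], passenger["last"]) in watch_list

def attribute_match(passenger, rules):
    """Intelligence-based matching: flag if any biographical rule fires."""
    return any(rule(passenger) for rule in rules)

watch_list = {("Cat", "Stevens")}
rules = [
    lambda p: "traveled in Yemen" in p["biography"],
    lambda p: "Nigerian student" in p["biography"],
    lambda p: "consorted with extremists" in p["biography"],
]

traveler = {
    "first": "Umar",
    "last": "Abdulmutallab",
    "biography": "Nigerian student, traveled in Yemen",
}

name_match(traveler, watch_list)   # False: the name is not on the list
attribute_match(traveler, rules)   # True: two biographical rules fire
```

The sketch shows why broader identifiers are more flexible: a traveler absent from the name list can still be flagged. It also shows why they are not automatically better, since each added rule is another way for an innocent traveler to match.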

The goal—unchanged—is to minimize both false positives and false negatives—that is, people flagged as potential terrorists who are not terrorists, and people not flagged as terrorists who are terrorists. A certain number of false positives are acceptable if that avoids false negatives, but a huge number of false positives will just waste resources relative to the margin of security the screening system creates. Given the overall paucity of terrorists—which is otherwise a good thing—it’s very easy to waste resources on screening.

I used the name “Cat Stevens” above because it’s one of several well-known examples of logic that caused false positives. Utterly simplistic identifiers like “traveled in Yemen” will also increase false positives dramatically. More subtle combinations of identifiers and logic can do better. The questions are how much they increase false positives, and whether the logic is built on enough information to produce true positives.
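Back-of-the-envelope base-rate arithmetic shows how the paucity of terrorists dominates the outcome. The traveler count is a rough order of magnitude, and the accuracy figures are deliberately generous hypotheticals:

```python
# Hypothetical base-rate arithmetic: even a screen that catches 99% of
# terrorists while flagging only 1% of innocents yields almost nothing
# but false positives when terrorists are rare.

travelers = 700_000_000      # rough annual U.S. enplanements (assumption)
terrorists = 10              # hypothetical number among them
hit_rate = 0.99              # fraction of actual terrorists flagged
false_positive_rate = 0.01   # fraction of innocents flagged

true_positives = terrorists * hit_rate
false_positives = (travelers - terrorists) * false_positive_rate

# Millions of innocents flagged per handful of true hits: screening
# resources are easy to waste when the base rate is this low.
innocents_per_hit = false_positives / true_positives
```

Under these (generous) assumptions, roughly seven million innocent travelers are flagged to catch about ten terrorists, which is the resource-waste problem described above.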

So far as we know, ATS-P never flagged a terrorist before it flagged the underwear bomber. DHS officials once tried to spin up a case in which ATS-P flagged someone who was involved in an Iraq car-bombing after being excluded from the U.S. However, I believe, as I suggested two years ago, that ATS-P flagged him as a likely visa overstayer and not as a terror threat. He may not even have been a terror threat when flagged; some reports indicate that he descended into terrorism after being excluded from the U.S. That makes the incident at best an example of luck rather than skill. So far as I know, nobody with knowledge of the incident has ever disputed my theory, which I think they would have done if they could.

The fact that ATS-P flagged one terrorist is poor evidence that it will “work” going forward. The program “working” in this case means that it finds true terrorists without flagging an unacceptable/overly costly number of non-terrorists.

Of course, different people are willing to accept different levels of cost to exclude terrorists from airplanes. I think I have come up with a good way to measure the benefits of screening systems like this so that costs and benefits can be compared, and the conversation can be focused.

Assume a motivated attacker who would eventually succeed. By approximating the amount of damage the attack would do and how long the security measure would delay it, one can roughly estimate the measure’s value.

Say, for example, that a particular attack might cause one million dollars in damage. Delaying it for a year is worth $50,000 at a 5% interest rate. Delaying for a month an attack that would cause $10 billion in damage is worth about $42 million.
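The delay-value arithmetic in these examples reduces to a one-line formula: interest, at the stated 5% annual rate, on the damage avoided for the length of the delay. A minimal sketch, using simple (non-compounded) interest as the examples do:

```python
# Time-value of delaying an attack: interest on the avoided damage,
# using simple interest at the article's stated 5% annual rate.

def delay_value(damage, delay_years, rate=0.05):
    """Dollar value of postponing an attack by delay_years."""
    return damage * rate * delay_years

delay_value(1_000_000, 1.0)        # $1M attack delayed a year  -> $50,000
delay_value(10_000_000_000, 1/12)  # $10B attack delayed a month -> ~$41.7M
```

Both figures match the examples in the text; the second rounds to the “about $42 million” stated above.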

(I think it is fair to assume that any major attack will happen only once, as it will produce responses that prevent it happening twice. The devastating “commandeering” attack on air travel and infrastructure is instructive. The 9/11 attacks permanently changed the posture of air passengers toward hijackers, and subsequent hardening of cockpit doors has brought the chance of another commandeering attack very close to nil.)

A significant weakness of identity-based screening (which “intelligence-based” screening—if there’s a difference—shares) is that it is testable. A person may learn if he or she is flagged for extra scrutiny simply by traveling a few times. A person who passes through airport security six times in a two-month period and does not receive extra scrutiny can be confident enough on the seventh trip that he or she will not be specially screened. If a person does receive special scrutiny on test runs, that’s notice of being in a suspect category, so someone else should carry out a planned attack.

(“We’ll make traveling often a ground for suspicion!” might go the answer. “False positives,” my rejoinder.)

Assuming that it takes two months more than it otherwise would to recruit and clear a clean-skin terrorist, as al Qaeda and its franchises have done, the dollar value of screening is $125 million. That is the amount saved (at a 5% interest rate) by delaying for two months an attack costing $15 billion (a RAND Corporation estimate of the total cost of a downed airliner, public reactions included).

Let’s say that the agility of having non-name identifiers does improve screening and causes it to take three months rather than two to find a candidate who can pass through the screen. Ignoring the costs of additional false positives (though they could be very high), the value of screening rises to $187.5 million.

(There is plenty of room to push and pull on all these assumptions. I welcome comments on both the assumptions and the logic of using the time-value of delayed attacks to quantify the benefits of security programs.)
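To make those assumptions easy to push and pull on, the arithmetic behind the figures can be laid out explicitly. The 5% rate and the $15 billion RAND damage estimate are as stated above; the two- and three-month recruiting delays are the assumptions in play:

```python
# The screening-value calculation with every assumption exposed.
# Damage estimate and interest rate are the figures used in the text.

def screening_value(damage, delay_months, rate=0.05):
    """Value of screening that delays an attack by delay_months,
    as simple interest on the damage over the delay period."""
    return damage * rate * (delay_months / 12)

damage = 15_000_000_000  # RAND estimate: downed airliner, reactions included

screening_value(damage, 2)  # two-month recruiting delay  -> $125,000,000
screening_value(damage, 3)  # three-month delay           -> $187,500,000
```

Changing any input (a lower damage estimate, a different discount rate, a shorter delay) flows straight through the formula, which is the point of the exercise: the disagreement can be about assumptions rather than arithmetic.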

A January 2009 study entitled, “Just How Much Does That Cost, Anyway? An Analysis of the Financial Costs and Benefits of the ‘No-Fly’ List,” put the amount expended on “no-fly” listing up to that time at between $300 million and $966 million, with a medium estimate of $536 million. The study estimated yearly costs at between $51 and $161 million, with a medium estimate of $89 million.

The new screening procedures, whose contours are largely speculative, may improve air security by some margin. Their additional costs are probably unknown to anyone yet as false positive rates have yet to be determined, and the system has yet to be calibrated. Under the generous assumption that this change makes it 50% harder to defeat the screening system, the value of screening rises, mitigating the ongoing loss that identity-based screening appears to bring to our overall welfare.

Hey, if you’ve read this far, you’ll probably go one or two more paragraphs…

It’s worth noting how the practice of “security by obscurity” degrades the capacity of outside critics to contribute to the improvement of homeland security programs. Keeping the contours of this system secret requires people like me to guess at what it is and how it works, so my assessment of its strengths and weaknesses is necessarily degraded. As usual, Bruce Schneier has smart things to say on security by obscurity, building on security principles generated over more than 125 years in the field of cryptography.

DHS could tell the public a great deal more about what it is doing. There is no good reason for the security bureaucracy to insist on going mano a mano against terrorism, turning away the many resources of the broader society. The margin of information the United States’ enemies might access would be more than made up for by the strength our security programs would gain from independent criticism and testing.