Tag: Stewart Baker

Bureaucrats and Big-Governmenters Work to Revive Their National ID

There are some rich ironies in a recent Stewart Baker blog post touting the slow crawl toward REAL ID compliance he believes states are making. One of the choicest is that his cheerleading for a national ID appears under a Hoover Institution banner that says “ADVANCING A FREE SOCIETY.”

No, having a national ID would not advance a free society. You could say “ADVANCING A SECURE SOCIETY” but even then you’d be overstating the case. A national ID would reduce the security of individuals massively in the aggregate in exchange for modest and arguable state security gains.

Speaking of which, Baker posts a picture of Mohammed Atta’s Florida driver’s license in his post. The implication is that having a national ID would have prevented the 9/11 attacks. In fact, having a national ID would have caused a mild inconvenience to the 9/11 attackers. Billions of dollars spent, massive aggregate inconvenience to law-abiding American citizens, and a much-more-powerful federal government so that terrorists could be mildly inconvenienced?

One of the greatest ironies is that Baker doesn’t—as he never has—take on the merits of how and how well a national ID would advance security goals. But the merits don’t matter. Baker’s post provides a nice reminder that the bureaucrats will use their big-government allies to restart their moribund national ID plans if they can. Despite massive public opposition to REAL ID, they’ll try to build it anyway.

An anti-immigration group recently issued a report saying that states are getting on board with REAL ID. (To the extent states are meeting the massively reduced REAL ID “milestones,” they’re doing so incidentally, not in response to federal demands.) National ID advocate Jim Sensenbrenner (R-WI) put on a lopsided show-hearing in the House Judiciary Committee last week, hoping to prop up REAL ID’s decaying body.

As if anyone would believe it, a DHS official said at the hearing that the January 2013 deadline for state compliance would not be extended. Book your tickets now, because there won’t be a damn thing different at the airport come January. The Department of Homeland Security hasn’t stood by any of its deadlines for REAL ID compliance. If it did, by refusing IDs from non-compliant states at the airport, the public outcry would be so large that REAL ID would be repealed within the week.

REAL ID will never be implemented. That doesn’t stop the federal government from spending money on it, so the bureaucrats keep trying to corral you into their national ID. They get occasional help, and sometimes it even travels under the false flag of “ADVANCING A FREE SOCIETY.”

Should a Congress that Doesn’t Understand Math Regulate Cybersecurity?

There’s a delicious irony in some of the testimony on cybersecurity that the Senate Homeland Security and Governmental Affairs Committee will hear today (starting at 2:30 Eastern — it’s unclear from the hearing’s page whether it will be live-streamed). Former National Security Agency general counsel Stewart Baker flubs a basic mathematical concept.

If Congress credits his testimony, is it really equipped to regulate the Internet in the name of “cybersecurity”?

Baker’s written testimony (not yet posted) says, stirringly, “Our vulnerabilities, and their consequences, are growing at an exponential rate.” He’s stirring cake batter, though. Here’s why.

Exponential growth occurs when the growth rate of the value of a mathematical function is proportional to the function’s current value. It’s nicely illustrated with rabbits. If in week one you have two rabbits, and in week two you have four, you can expect eight rabbits in week three and sixteen in week four. That’s exponential growth. The number of rabbits each week dictates the number of rabbits the following week. By the end of the year, the earth will be covered in rabbits. (The Internet provides us an exponents calculator, you see. Try calculating 2^52.)
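For the curious, here is a minimal sketch of that rabbit arithmetic in Python; the doubling-every-week rule above is the only assumption.

# A minimal sketch of the rabbit arithmetic above: week one has 2 rabbits,
# and each week's count is double the week before (exponential growth).
rabbits = 2
for week in range(2, 53):  # weeks 2 through 52
    rabbits *= 2
print(rabbits)  # 4503599627370496, i.e., 2**52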

The vulnerabilities of computers, networks, and data may be growing. But such vulnerabilities are not a function of the number of transistors that can be placed on an integrated circuit. Baker is riffing on Moore’s Law, which describes long-term exponential growth in computing power.

Instead, vulnerabilities will generally be a function of the number of implementations of information technology. A new protocol may open one or more vulnerabilities. A new piece of software may have one or more vulnerabilities. A new chip design may have one or more vulnerabilities. Interactions between various protocols and pieces of hardware and software may create vulnerabilities. And so on. At worst, in some fields of information technology, there might be something like cubic growth in vulnerabilities, but it’s doubtful that such a trend could last.

Why? Because vulnerabilities are also regularly closing. Protocols get ironed out. Software bugs get patched. Bad chip designs get fixed.

There’s another dimension along which vulnerabilities are also probably growing. This would be a function of the “quantity” of information technology out there. If there are 10,000 instances of a given piece of software in use out there with a vulnerability, that’s 10,000 vulnerabilities. If there are 100,000 instances of it, that’s 10 times more vulnerabilities—but that’s still linear growth, not exponential growth. The number of vulnerabilities grows in direct proportion to the number of instances of the technology.

Ignore the downward pressure on vulnerabilities, though, and put growth in the number of vulnerabilities together with growth in the propagation of vulnerabilities. Don’t you have exponential growth? No. At worst you have polynomial growth. The growth in vulnerability from new implementations of information technology and new instances of that technology multiply. Across technologies, they sum. They don’t act as exponents to one another.
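A toy comparison makes the point. The figures below are invented for illustration only: if both the number of implementations and the number of instances of each grow linearly, their product grows polynomially, which a true exponential curve eventually dwarfs.

# Illustrative only: invented figures, not estimates of real vulnerability counts.
# Linear growth in implementations times linear growth in instances gives
# polynomial (here quadratic) growth; exponential growth doubles every period
# and eventually outruns any polynomial.
for year in range(1, 11):
    implementations = 100 * year          # assumed linear growth
    instances_each = 1_000 * year         # assumed linear growth
    vulnerable_instances = implementations * instances_each  # quadratic in year
    doubling = 2 ** year                  # exponential, for contrast
    print(year, vulnerable_instances, doubling)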

Baker uses “vulnerability” and “threat” interchangeably, but careful thinkers about risk wouldn’t do this, I don’t think. Vulnerability is the existence of weakness. Threat is someone or something animated to exploit it (a “hazard” if that thing is inanimate). Vulnerabilities don’t really matter, in fact, if there isn’t anyone to exploit them. Do you worry about the number of hairs on your body being a source of pain? No, because nobody is going to come along and pluck them all. You need to have a threat vector, or vulnerability is just idle worry.

Now, threats can multiply quickly online. When exploits to some vulnerabilities are devised, their creators can propagate them quickly to others, such as “script kiddies” who will run such exploits everywhere they can. Hence the significance of the “zero-day threat” and the importance of patching software promptly.

As to consequence, Baker cites examples of recent hacks on HBGary, RSA, Verisign, and DigiNotar, as well as weakness in industrial control systems. This says nothing about growth rates, much less how the number of hacks in the last year forms the basis for more in the next. If some hacks allow other hacks to be implemented, that, again, would be a multiplier, not an exponent. (Generally, these most worrisome hacks can’t be executed by script kiddies, so they are not soaring in numerosity. You know what happens to consequential hacks that do soar in numerosity? They’re foreclosed by patches.)

Vulnerability and threat analyses are inputs into determinations about the likelihood of bad things happening. The next step is to multiply that likelihood by the consequence. The product is a sense of how important a given risk is. That’s risk assessment.
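In miniature, and with placeholder numbers rather than estimates of any real threat, the arithmetic looks like this:

# Risk assessment in miniature. All figures are placeholders, not estimates.
likelihood = 0.001          # assumed chance the bad thing happens this year
consequence = 10_000_000    # assumed dollar cost if it does
expected_loss = likelihood * consequence
print(expected_loss)        # 10000.0, the figure to weigh against mitigation costs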

But Baker isn’t terribly interested in acute risk management. During his years as Assistant Secretary for Policy at the Department of Homeland Security, the agency didn’t do the risk management work that would validate or invalidate the strip-search machine/intrusive pat-down policy (and it still hasn’t, despite a court order). The bill he’s testifying in support of wouldn’t manage cybersecurity risks terribly well, either, for reasons I’ll articulate in a forthcoming post.

Do your representatives in Congress get the math involved here? Do they know the difference between exponential growth and linear growth? Do they “get” risk management? Chances are they don’t. They may even parrot the “statistic” that Baker is putting forth. How well equipped do you suppose a body like that is for telling you how to do your cybersecurity?

Making Sense of New TSA Procedures

Since they were announced recently, I’ve been working to make sense of new security procedures that TSA is applying to flights coming into the U.S.

“These new measures utilize real-time, threat-based intelligence along with multiple, random layers of security, both seen and unseen, to more effectively mitigate evolving terrorist threats,” says Secretary Napolitano.

That reveals essentially nothing of what they are, of course. Indeed, “For security reasons, the specific details of the directives are not public.”

But we in the public aren’t so many potted plants. We need to know what they are, both because our freedoms are at stake and because our tax money will be spent on these measures.

Let’s start at the beginning, with identity-based screening and watch-listing in general. A recent report in the New York Times sums it up nicely:

The watch list is actually a succession of lists, beginning with the Terrorist Identities Datamart Environment, or TIDE, a centralized database of potential suspects.  … [A]bout 10,000 names come in daily through intelligence reports, but … a large percentage are dismissed because they are based on “some combination of circular reporting, poison pens, mistaken identities, lies and so forth.”

Analysts at the counterterrorism center then work with the Terrorist Screening Center of the F.B.I. to add names to what is called the consolidated watch list, which may have any number of consequences for those on it, like questioning by the police during a traffic stop or additional screening crossing the border. That list, in turn, has various subsets, including the no-fly list and the selectee list, which requires passengers to undergo extra screening.

The consolidated list has the names of more than 400,000 people, about 97 percent of them foreigners, while the no-fly and selectee lists have about 6,000 and 20,000, respectively.

After the December 25, 2009 attempted bombing of a Northwest Airlines flight from Amsterdam into Detroit, TSA quickly established, then quickly lifted, an oppressive set of rules for travelers, including bans on blankets and on moving about the cabin during the latter stages of flights. In the day or two after a new attempt, security excesses of this kind are forgivable.

But TSA also established identity-based security rules of similar provenance and greater persistence, subjecting people from fourteen countries, mostly Muslim-dominated, to special security screening. This was a ham-handed reaction, increasing security against the unlikely follow-on attacker by a tiny margin while driving wedges between the U.S. and people well positioned to help our security efforts.

Former DHS official Stewart Baker recently discussed the change to this policy on the Volokh Conspiracy blog:

The 14-country approach wasn’t a long-term solution.  So some time in January or February, with little fanfare, TSA seems to have begun doing something much more significant.  It borrowed a page from the Customs and Border Protection playbook, looking at all passengers on a flight, running intelligence checks on all of them, and then telling the airlines to give extra screening to the ones that looked risky.

Marc Ambinder lauded the new policy on the Atlantic blog, describing it thusly:

The new policy, for the first time, makes use of actual, vetted intelligence. In addition to the existing names on the “No Fly” and “Selectee” lists, the government will now provide unclassified descriptive information to domestic and international airlines and to foreign governments on a near-real time basis.

Likely, the change is, or is very much like, applying a Customs and Border Protection program called ATS-P (Automated Targeting System - Passenger) to air travel screening.

“[ATS-P] compares Passenger Name Record (PNR) and information in [various] databases against lookouts and patterns of suspicious activity identified by analysts based upon past investigations and intelligence,” says this Congressional Research Service report.

“It was through application of the ATS-P that CBP officers at the National Targeting Center selected Umar Farouk Abdulmutallab, who attempted to detonate an explosive device on board Northwest Flight 253 on December 25, 2009, for further questioning upon his arrival at the Detroit Metropolitan Wayne County Airport.”

Is using ATS-P or something like it an improvement in the way airline security is being done? It probably is.

A watch-list works by comparing the names of travelers to the names of people that intelligence has deemed concerning. To simplify, the logic looks something like this:

If first name=”Cat” (or variants) and last name=”Stevens”, then *flag!*

Using intelligence directly just broadens the identifiers you use, so the comparison (again simplified) might look something like this:

If biography contains “traveled in Yemen” or “Nigerian student” or “consorted with extremists”, then *flag!*
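Here is a hypothetical sketch of both styles of check in Python. The field names, biographies, and rules are invented for illustration; they are not how TSA’s or CBP’s systems actually work.

# Hypothetical illustration only; fields and rules are invented, not TSA's logic.
def flag_by_name(passenger):
    return passenger["first"] == "Cat" and passenger["last"] == "Stevens"

SUSPICIOUS_MARKERS = {"traveled in Yemen", "Nigerian student", "consorted with extremists"}

def flag_by_identifiers(passenger):
    return bool(SUSPICIOUS_MARKERS & set(passenger.get("biography", [])))

traveler = {"first": "Cat", "last": "Stevens", "biography": ["traveled in Yemen"]}
print(flag_by_name(traveler), flag_by_identifiers(traveler))  # True True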

The ability to flag a potential terrorist with identifiers beyond name is a potential improvement. Such a screening system would be more flexible than one that used purely name-based matching. But using more identifiers isn’t automatically better.

The goal—unchanged—is to minimize both false positives and false negatives—that is, people flagged as potential terrorists who are not terrorists, and people not flagged as terrorists who are terrorists. A certain number of false positives are acceptable if that avoids false negatives, but a huge number of false positives will just waste resources relative to the margin of security the screening system creates. Given the overall paucity of terrorists—which is otherwise a good thing—it’s very easy to waste resources on screening.

I used the name “Cat Stevens” above because it’s one of several well known examples of logic that caused false positives. Utterly simplistic identifiers like “traveled in Yemen” will also increase false positives dramatically. More subtle combinations of identifiers and logic can do better. The questions are how far they increase false positives, and whether the logic is built on enough information to produce true positives.
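A back-of-the-envelope illustration of why the paucity of terrorists matters so much (every number below is an assumption, not data): even a screen with a very low false-positive rate will flag vastly more innocent travelers than attackers.

# Base-rate arithmetic with assumed figures; none of these numbers is real data.
travelers_per_year = 700_000_000   # assumed number of screenings
actual_attackers = 10              # assumed attackers in that population
false_positive_rate = 0.001        # assume 0.1% of innocents get wrongly flagged
true_positive_rate = 0.9           # assume 90% of attackers get flagged

false_positives = (travelers_per_year - actual_attackers) * false_positive_rate
true_positives = actual_attackers * true_positive_rate
print(int(false_positives), true_positives)  # ~700,000 innocents flagged vs. 9 attackers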

So far as we know, ATS-P has never flagged a terrorist before it flagged the underwear bomber. DHS officials tried once to spin up a case in which ATS-P flagged someone who was involved in an Iraq car-bombing after being excluded from the U.S. However, I believe, as I suggested two years ago, that ATS-P flagged him as a likely visa overstayer and not as a terror threat. He may not have been a terror threat when flagged, as some reports have it that he descended into terrorism after being excluded from the U.S. This makes the incident at best an example of luck rather than skill. That I know of, nobody with knowledge of the incident has ever disputed my theory, which I think they would have done if they could.

The fact that ATS-P flagged one terrorist is poor evidence that it will “work” going forward. The program “working” in this case means that it finds true terrorists without flagging an unacceptable/overly costly number of non-terrorists.

Of course, different people are willing to accept different levels of cost to exclude terrorists from airplanes. I think I have come up with a good way to measure the benefits of screening systems like this so that costs and benefits can be compared, and the conversation can be focused.

Assume a motivated attacker who would eventually succeed. By approximating the amount of damage the attack might do and how long it would take the attacker to defeat the security measure, one can roughly estimate the security measure’s value.

Say, for example, that a particular attack might cause one million dollars in damage. Delaying it for a year is worth $50,000 at a 5% interest rate. Delaying for a month an attack that would cause $10 billion in damage is worth about $42 million.
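Those figures come from simple interest on the delayed damage; here is the same arithmetic as a small sketch.

# Time-value of delaying an attack, using simple interest as in the text above.
def delay_value(damage_dollars, delay_years, rate=0.05):
    return damage_dollars * rate * delay_years

print(delay_value(1_000_000, 1))            # 50000.0, a $1M attack delayed one year
print(delay_value(10_000_000_000, 1 / 12))  # ~41,666,667, a $10B attack delayed one month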

(I think it is fair to assume that any major attack will happen only once, as it will produce responses that prevent it happening twice. The devastating “commandeering” attack on air travel and infrastructure is instructive. The 9/11 attacks permanently changed the posture of air passengers toward hijackers, and subsequent hardening of cockpit doors has brought the chance of another commandeering attack very close to nil.)

A significant weakness of identity-based screening (which “intelligence-based” screening—if there’s a difference—shares) is that it is testable. A person may learn if he or she is flagged for extra scrutiny simply by traveling a few times. A person who passes through airport security six times in a two-month period and does not receive extra scrutiny can be confident enough on the seventh trip that he or she will not be specially screened. If a person does receive special scrutiny on test runs, that’s notice of being in a suspect category, so someone else should carry out a planned attack.

(“We’ll make traveling often a ground for suspicion!” might go the answer. “False positives,” my rejoinder.)

Assuming that it takes two months more than it otherwise would to recruit and clear a clean-skin terrorist, as Al Qaeda and Al Qaeda franchises have done, the dollar value of screening is $125 million. That is the amount saved (at a 5% interest rate) by delaying for two months an attack costing $15 billion (a RAND corporation estimate of the total cost of a downed airliner, public reactions included).

Let’s say that the agility of having non-name identifiers does improve screening and causes it to take three months rather than two to find a candidate who can pass through the screen. Ignoring the costs of additional false positives (though they could be very high), the value of screening rises to $187.5 million.
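Running the same simple-interest arithmetic against the figures in the last two paragraphs:

# Same simple-interest arithmetic, at a 5% rate and the $15 billion RAND figure.
rate = 0.05
print(15_000_000_000 * rate * 2 / 12)  # 125000000.0, a two-month delay
print(15_000_000_000 * rate * 3 / 12)  # 187500000.0, a three-month delay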

(There is plenty of room to push and pull on all these assumptions. I welcome comments on both the assumptions and the logic of using the time-value of delayed attacks to quantify the benefits of security programs.)

A January 2009 study entitled, “Just How Much Does That Cost, Anyway? An Analysis of the Financial Costs and Benefits of the ‘No-Fly’ List,” put the amount expended on “no-fly” listing up to that time at between $300 million and $966 million, with a medium estimate of $536 million. The study estimated yearly costs at between $51 and $161 million, with a medium estimate of $89 million.

The new screening procedures, whose contours are largely speculative, may improve air security by some margin. Their additional costs are probably unknown to anyone yet as false positive rates have yet to be determined, and the system has yet to be calibrated. Under the generous assumption that this change makes it 50% harder to defeat the screening system, the value of screening rises, mitigating the ongoing loss that identity-based screening appears to bring to our overall welfare.

Hey, if you’ve read this far, you’ll probably go one or two more paragraphs…

It’s worth noting how the practice of “security by obscurity” degrades the capacity of outside critics to contribute to the improvement of homeland security programs. Keeping the contours of this system secret requires people like me to guess at what it is and how it works, so my assessment of its strengths and weaknesses is necessarily degraded. As usual, Bruce Schneier has smart things to say on security by obscurity, building on security principles generated over more than 125 years in the field of cryptography.

DHS could tell the public a great deal more about what it is doing. There is no good reason for the security bureaucracy to insist on going mano a mano against terrorism, turning away the many resources of the broader society. The margin of information the United States’ enemies might access would be more than made up for by the strength our security programs would gain from independent criticism and testing.